diff --git a/tutorials/easydeploy-gitlab-server/index.mdx b/tutorials/easydeploy-gitlab-server/index.mdx index 5d3a2023b0..c15a291459 100644 --- a/tutorials/easydeploy-gitlab-server/index.mdx +++ b/tutorials/easydeploy-gitlab-server/index.mdx @@ -9,7 +9,7 @@ tags: GitLab server kubernetes easy deploy categories: - containers dates: - validation: 2024-06-20 + validation: 2025-01-02 posted: 2024-06-20 validation_frequency: 24 --- diff --git a/tutorials/install-ispconfig/index.mdx b/tutorials/install-ispconfig/index.mdx index 3c3d056481..40f3a0baff 100644 --- a/tutorials/install-ispconfig/index.mdx +++ b/tutorials/install-ispconfig/index.mdx @@ -10,11 +10,13 @@ categories: - domains-and-dns tags: hosting ISPconfig Ubuntu Linux dates: - validation: 2024-06-25 + validation: 2025-01-02 posted: 2019-01-25 --- -ISPConfig is an open-source, transparent, free, stable, and secure administration tool, available in more than 20 languages. ISPConfig simplifies the management of various web hosting services such as DNS configuration, domain name management, email, or FTP file transfer. It can be used to manage a single server, multiple servers for larger setups, or even mirrored clusters. +ISPConfig is an open-source, transparent, free, stable, and secure administration tool, available in more than 20 languages. +ISPConfig simplifies the management of various web hosting services such as DNS configuration, domain name management, email, or FTP file transfer. +It can be used to manage a single server, multiple servers for larger setups, or even mirrored clusters. @@ -30,12 +32,12 @@ ISPConfig is an open-source, transparent, free, stable, and secure administratio 1. [Log into your Instance](/compute/instances/how-to/connect-to-instance/) via SSH using the root account. 2. Update and upgrade the software already installed on the Instance. - ``` + ```bash apt update && apt upgrade -y ``` 3. Configure the hostname of your Instance. 
* Open the `/etc/hosts` file in a text editor and ensure it looks like the following example: `IP address - space - subdomain.domain.tld - space - subdomain`. - ```s + ``` 127.0.0.1 localhost.localdomain localhost # This line should be changed to the correct servername: 127.0.1.1 server1.example.com server1 @@ -50,11 +52,11 @@ ISPConfig is an open-source, transparent, free, stable, and secure administratio * Edit the file `/etc/hostname` and make sure it contains only the subdomain part of the hostname (e.g. `server1`). 4. Reboot the Instance to apply the hostname configuration. - ``` + ```bash systemctl reboot ``` 5. Login again and check the hostname configuration using the following commands: - ``` + ```bash hostname hostname -f ``` @@ -66,13 +68,13 @@ ISPConfig is an open-source, transparent, free, stable, and secure administratio Ensure a corresponding DNS record (A and/or AAAA) for the subdomain exists in your DNS zone and is pointing to the IP of your Instance. -6. Download and run the ISPConfig auto-installer to install the panel with Nginx web server, a port range for passive FTP and unattended upgrades: - ``` +6. Download and run the ISPConfig auto-installer to install the panel with Nginx web server, a port range for passive FTP, and unattended upgrades: + ```bash wget -O - https://get.ispconfig.org | sh -s -- --use-nginx --use-ftp-ports=40110-40210 --unattended-upgrades ``` - If you want to install the Apache web server instead of Nginx run the following command instead: - ``` + If you want to install the Apache web server instead of Nginx, run the following command instead: + ```bash wget -O - https://get.ispconfig.org | sh -s -- --use-ftp-ports=40110-40210 --unattended-upgrades ``` @@ -91,7 +93,7 @@ ISPConfig is an open-source, transparent, free, stable, and secure administratio ``` -## Configuring the firewall +## Configuring the firewall 1. Open the ISPConfig UI (e.g. `https://server1.example.com:8080`) and log in with your credentials. 
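The `/etc/hosts` layout described in step 3 (`IP address - space - subdomain.domain.tld - space - subdomain`) can be checked mechanically. A minimal sketch using the tutorial's example entry (the values are placeholders; substitute your own hostname):

```shell
# Sketch: validate that a hosts entry follows "IP  subdomain.domain.tld  subdomain"
# (example values from this tutorial)
entry="127.0.1.1 server1.example.com server1"
set -- $entry
ip=$1; fqdn=$2; short=$3
# the short hostname must be the first label of the fully qualified name
if [ "${fqdn%%.*}" = "$short" ]; then
  echo "hosts entry OK"
else
  echo "hosts entry mismatch" >&2
fi
```

The same check applies to any entry: `hostname` should print the short name and `hostname -f` the fully qualified one.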
@@ -111,4 +113,4 @@ ISPConfig is an open-source, transparent, free, stable, and secure administratio
 6. Once you have added the necessary records, save the firewall configuration.
 
-You now have finished the basic configuration of your server using ISPConfig. For further information and avanced configuration of your server, refer to the [official ISPConfig documentation](https://www.ispconfig.org/documentation/).con
\ No newline at end of file
+You now have finished the basic configuration of your server using ISPConfig. For further information and advanced configuration of your server, refer to the [official ISPConfig documentation](https://www.ispconfig.org/documentation/).
diff --git a/tutorials/k8s-gitlab/index.mdx b/tutorials/k8s-gitlab/index.mdx
index 016e3a3bfb..682b65e266 100644
--- a/tutorials/k8s-gitlab/index.mdx
+++ b/tutorials/k8s-gitlab/index.mdx
@@ -10,7 +10,7 @@ categories:
   - kubernetes
   - instances
 dates:
-  validation: 2024-06-17
+  validation: 2025-01-02
   posted: 2020-06-09
 ---
 
@@ -33,14 +33,22 @@ In this tutorial, you will learn how to use the `gitlab` Kubernetes integration
 
 In this tutorial, we use `helm` to deploy a `gitlab` runner on a `Kapsule` cluster.
 
-If you do not know how to install `helm`, follow the [tutorial](https://helm.sh/docs/intro/install/) on the official `helm` website.
+Ensure you are using a recent version of Helm (3.16.2 at the time of writing).
 
-In the example below we have successfully installed `helm` version 3.2.0.
+#### Installation steps
 
-```
-helm version
-version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
-```
+1. Download and run the Helm installation script:
+    ```bash
+    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
+    chmod 700 get_helm.sh
+    ./get_helm.sh
+    ```
+
+2. Verify the installation:
+    ```bash
+    helm version
+    ```
+    Ensure the output shows the expected version (3.16.2 at the time of writing).
 
 The `helm` charts are provided through repositories. 
By default `helm` 3 does not have any repository configured. We will add the `gitlab` repository, as it provides the necessary chart to install the runner.
@@ -60,7 +68,7 @@ A `helm` chart is always shipped with a `value.yaml` file. It can be edited to c
 
 In this part of the tutorial we customize the `value.yaml` to fit our needs and deploy the runner on `kapsule`.
 
-1. Get the value.yaml:
+1. Fetch the latest `values.yaml` file:
    ```
    wget https://gitlab.com/gitlab-org/charts/gitlab-runner/-/raw/main/values.yaml
    ```
@@ -71,31 +79,26 @@ In this part of the tutorial we customize the `value.yaml` to fit our needs and
 
-2. Fill the `value.yaml` with:
-    - the gitlabUrl (in our case `http://212.47.xxx.yyy/`)
-    - the registration token
-    - enable `rbac`
-
-    
-      By default, the gitlabUrl and the registration token lines are written as a comment in the `values.yaml`file. Make sure you have deleted the `#` before saving.
-    
+2. Open the file in a text editor and update the following fields in `values.yaml`:
    ```yaml
-    [..]
-    gitlabUrl: http://212.47.xxx.yyy/
-    runnerRegistrationToken: "t7u_qjh3EFJX2-yPypkz"
+    gitlabUrl: http://<gitlab-url>/
+    runnerRegistrationToken: "<registration-token>"
     rbac:
      create: true
-    [..]
      serviceAccountName: default
    ```
+    Ensure you replace `<gitlab-url>` and `<registration-token>` with your actual GitLab instance URL and registration token.
 
-    We will use a dedicated [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
-3. To install the `gitlab` runner, create it on your `Kapsule` cluster:
+
+    
+      By default, the gitlabUrl and the registration token lines are written as a comment in the `values.yaml` file. Make sure you have deleted the `#` before saving.
+    
+
+3. To install the `gitlab` runner, create it on your `Kapsule` cluster. We will use a dedicated [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
    ```
    kubectl create ns gitlab-runner
-    namespace/gitlab-runner created
    ```
+    The output displays `namespace/gitlab-runner created`. 
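Before running `helm install`, it can help to sanity-check the edited `values.yaml`. The sketch below is not part of the official chart workflow; it writes a throwaway copy with placeholder values and verifies that the registration fields are set and no longer commented out:

```shell
# Sketch (assumed workflow): verify the two registration fields are uncommented.
# The file content here uses placeholder values for illustration only.
cat > /tmp/values-check.yaml <<'EOF'
gitlabUrl: http://212.47.0.1/
runnerRegistrationToken: "example-token"
rbac:
  create: true
EOF
if grep -Eq '^[[:space:]]*#[[:space:]]*(gitlabUrl|runnerRegistrationToken)' /tmp/values-check.yaml; then
  echo "registration fields are still commented out" >&2
else
  echo "values OK"
fi
```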
The default service account should use a new Kubernetes [role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-example), and [rolebinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example). 4. Use the following example to create a role and role binding and associate it to the default service account in the `gitlab-runner` namespace: @@ -121,7 +124,7 @@ In this part of the tutorial we customize the `value.yaml` to fit our needs and ``` 5. Use the `helm` command to install the runner (note that you specify in this command line the `values.yaml` file): ``` - helm install --namespace gitlab-runner gitlab-runner -f ./values.yaml gitlab/gitlab-runner + helm install --namespace gitlab-runner gitlab-runner -f ./values.yaml gitlab/gitlab-runner --version 0.68.1 NAME: gitlab-runner LAST DEPLOYED: Wed May 6 15:48:20 2020 NAMESPACE: gitlab-runner @@ -131,7 +134,7 @@ In this part of the tutorial we customize the `value.yaml` to fit our needs and NOTES: Your GitLab Runner should now be registered against the GitLab instance reachable at: "http://212.47.xxx.yyy/" ``` - + The command above installs the GitLab Runner Helm chart version 0.68.1 in the `gitlab-runner` namespace. You can check the runner is working in the `gitlab` console ("admin area" > runners): diff --git a/tutorials/k8s-velero-backup/index.mdx b/tutorials/k8s-velero-backup/index.mdx index 846850de0a..fa43a23f4a 100644 --- a/tutorials/k8s-velero-backup/index.mdx +++ b/tutorials/k8s-velero-backup/index.mdx @@ -1,16 +1,16 @@ --- meta: title: Back up your Kapsule cluster on Object Storage with Velero - description: Learn how to configure Velero to back up your Kubernetes Kapsule cluster on Object Storage in this tutorial. + description: Learn how to configure Velero to back up your Kubernetes Kapsule cluster on Scaleway Object Storage in this tutorial. 
content: h1: Back up your Kapsule cluster on Object Storage with Velero - paragraph: Learn how to configure Velero to back up your Kubernetes Kapsule cluster on Object Storage in this tutorial. + paragraph: Learn how to configure Velero to back up your Kubernetes Kapsule cluster on Scaleway Object Storage in this tutorial. tags: velero k8s kubernetes kapsule object-storage categories: - kubernetes - object-storage dates: - validation: 2024-06-25 + validation: 2025-01-02 posted: 2023-06-02 --- diff --git a/tutorials/librenms-monitoring/index.mdx b/tutorials/librenms-monitoring/index.mdx index 287513881e..359ec9b727 100644 --- a/tutorials/librenms-monitoring/index.mdx +++ b/tutorials/librenms-monitoring/index.mdx @@ -1,19 +1,19 @@ --- meta: - title: Monitoring Instances with LibreNMS on Ubuntu Focal Fossa + title: Monitoring Instances with LibreNMS on Ubuntu Noble Numbat (24.04) description: Learn how to monitor Instances using LibreNMS, an open-source PHP/MySQL network monitoring system. content: - h1: Monitoring Instances with LibreNMS on Ubuntu Focal Fossa + h1: Monitoring Instances with LibreNMS on Ubuntu Noble Numbat (24.04) paragraph: Learn how to monitor Instances using LibreNMS, an open-source PHP/MySQL network monitoring system. tags: LibreNMS Ubuntu Focal-Fossa categories: - instances dates: - validation: 2024-06-25 + validation: 2025-01-02 posted: 2019-07-04 --- -LibreNMS is a fully-featured network monitoring system that supports a wide range of network hardware and operating systems including Linux and Windows. As well as network equipment made by Cisco, Juniper, Foundry, and many more. +Learn how to use LibreNMS to monitor Instances on Ubuntu 24.04 (Noble Numbat). LibreNMS is an open-source network monitoring system supporting a wide range of network hardware and operating systems, including Linux and Windows. The software is based on PHP and MySQL (MariaDB) and is a community-based fork of the last GPL-licensed version of Observium. 
@@ -22,49 +22,38 @@ The software is based on PHP and MySQL (MariaDB) and is a community-based fork o
 
 - A Scaleway account logged into the [console](https://console.scaleway.com)
 - [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
 - An [SSH key](/identity-and-access-management/organizations-and-projects/how-to/create-ssh-key/)
-- An [Instance](/compute/instances/how-to/create-an-instance/) running on Ubuntu Focal Fossa Beaver
+- An [Instance](/compute/instances/how-to/create-an-instance/) running on Ubuntu Noble Numbat (24.04)
- A [domain or subdomain](/network/domains-and-dns/quickstart/) pointed to your Instance
 
 ## Installing LibreNMS
 
-1. Update the apt repository information.
+1. Update the apt package cache and upgrade the already installed system packages:
    ```
-    apt update
+    apt update && apt upgrade -y
    ```
 2. Install the required packages.
    ```
-    apt install software-properties-common
-    add-apt-repository universe
-    apt update
-    apt install acl curl composer fping git graphviz imagemagick mariadb-client mariadb-server mtr-tiny nginx-full nmap php7.4-cli php7.4-curl php7.4-fpm php7.4-gd php7.4-json php7.4-mbstring php7.4-mysql php7.4-snmp php7.4-xml php7.4-zip python3-memcache python3-mysqldb python3-pip rrdtool snmp snmpd whois
+    apt install -y acl curl composer fping git graphviz imagemagick mariadb-client mariadb-server mtr-tiny nginx-full nmap php-cli php-curl php-fpm php-gd php-json php-mbstring php-mysql php-snmp php-xml php-zip python3-memcache python3-mysqldb python3-pip rrdtool snmp snmpd whois
    ```
 3. Create a user for LibreNMS:
    ```
-    useradd librenms -d /opt/librenms -M -r
-    usermod -a -G librenms www-data
-    ```
-4. 
Enter the directory `/opt` and download LibreNMS: + useradd -r -M -d /opt/librenms librenms + usermod -aG librenms www-data ``` - cd /opt - git clone https://github.com/librenms/librenms.git - ``` -5. Set the permissions: +4. Download and configure LibreNMS: ``` + git clone https://github.com/librenms/librenms.git /opt/librenms chown -R librenms:librenms /opt/librenms chmod 770 /opt/librenms setfacl -d -m g::rwx /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/ setfacl -R -m g::rwx /opt/librenms/rrd /opt/librenms/logs /opt/librenms/bootstrap/cache/ /opt/librenms/storage/ ``` -6. Switch into the `librenms` user: - ``` - su - librenms - ``` -7. Install the PHP dependencies: + +5. Install PHP dependencies: ``` + sudo -u librenms bash + cd /opt/librenms ./scripts/composer_wrapper.php install --no-dev - ``` -8. Logout from the user session: - ``` exit ``` @@ -88,8 +77,7 @@ LibreNMS stores the data collected from the monitored systems in a MySQL databas FLUSH PRIVILEGES; exit ``` - - Replace `` with a secure password of your choice. + Replace `` with a secure password of your choice. 4. Open the file `/etc/mysql/mariadb.conf.d/50-server.cnf` in a text editor, for example `nano`: ``` nano /etc/mysql/mariadb.conf.d/50-server.cnf diff --git a/tutorials/manage-database-instance-pgadmin4/index.mdx b/tutorials/manage-database-instance-pgadmin4/index.mdx index 38f176d749..5b3600c010 100644 --- a/tutorials/manage-database-instance-pgadmin4/index.mdx +++ b/tutorials/manage-database-instance-pgadmin4/index.mdx @@ -10,11 +10,11 @@ categories: - compute - postgresql-and-mysql dates: - validation: 2024-06-25 + validation: 2025-01-02 posted: 2019-10-28 --- -[pgAdmin](https://www.pgadmin.org/) is an open-source management tool for PostgreSQL databases. It allows the management of your [Scaleway Database Instances](https://www.scaleway.com/en/database/) and other PostgreSQL databases through an easy-to-use web-interface within your web browser. 
+pgAdmin is an open-source management tool for PostgreSQL databases. It allows the management of your [Scaleway Database Instances](https://www.scaleway.com/en/database/) and other PostgreSQL databases through an easy-to-use web-interface within your web browser. @@ -29,14 +29,12 @@ dates: 1. [Connect to your Instance](/compute/instances/how-to/connect-to-instance/) via SSH. 2. Update the `apt` sources and the software already installed on the Instance: - ``` apt update && apt upgrade -y ``` -3. Import the PostgreSQL [repository key](https://www.postgresql.org/media/keys/ACCC4CF8.asc): +3. Import the PostgreSQL repository signing key: ``` - sudo apt-get install curl ca-certificates gnupg - sudo curl https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo apt-key add + curl -fsS https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo gpg --dearmor -o /usr/share/keyrings/packages-pgadmin-org.gpg ``` 4. Add the PostgreSQL repository to the APT package manager, by configuring the file `/etc/apt/sources.list.d/pgdg.list`: ``` diff --git a/tutorials/manage-k8s-logging-loki/index.mdx b/tutorials/manage-k8s-logging-loki/index.mdx index ee33b22796..b4f614b13b 100644 --- a/tutorials/manage-k8s-logging-loki/index.mdx +++ b/tutorials/manage-k8s-logging-loki/index.mdx @@ -9,13 +9,13 @@ tags: Grafana Loki Kubernetes logs categories: - kubernetes dates: - validation: 2024-06-20 + validation: 2025-01-02 posted: 2019-11-06 --- - Kubernetes Kapsule is fully integrated with Scaleway's [Observability Cockpit](/observability/cockpit/quickstart/). - You can [monitor your cluster](/containers/kubernetes/how-to/monitor-cluster/) directly from the cluster's dashboard, eliminating the need to set up your own monitoring solution. + Kubernetes Kapsule is fully integrated with Scaleway's [Observability Cockpit](/observability/cockpit/quickstart/). 
+ You can [monitor your cluster](/containers/kubernetes/how-to/monitor-cluster/) directly from the cluster's dashboard, eliminating the need to set up your own monitoring solution. The following content is provided for informational purposes only. @@ -31,97 +31,56 @@ Loki is a log aggregation system inspired by **Prometheus**. It is easy to opera - Configured [kubectl](/containers/kubernetes/how-to/connect-cluster-kubectl/) on your machine - Installed `helm` (version 3.2+), the Kubernetes [packet manager](https://helm.sh/), on your local machine - - The `loki` application is not included in the default Helm repositories. - Since December 2020, Loki's Helm charts have been moved from their initial location within the Loki repository to their new location at [https://github.com/grafana/helm-charts](https://github.com/grafana/helm-charts). - - 1. Add the Grafana repository to Helm and update it. - ``` - helm repo add grafana https://grafana.github.io/helm-charts - helm repo update - ``` - - Which returns: - - ``` - "grafana" has been added to your repositories - Hang tight while we grab the latest from your chart repositories... - ...Successfully got an update from the "loki" chart repository - ...Successfully got an update from the "grafana" chart repository - Update Complete. ⎈Happy Helming!⎈ - ``` -2. Install all the stack in a Kubernetes dedicated [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `loki-stack`, using Helm. 
It must be deployed to your cluster and persistence must be enabled (allow Helm to create a Scaleway block device and attach it to the Loki pod to store its data) using a Kubernetes [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to survive a pod re-schedule: - ``` - helm install loki-stack grafana/loki-stack \ - --create-namespace \ - --namespace loki-stack \ - --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi - ``` - - It will use Kapsule's default storage class, `scw-bsdd`, to create block volumes using Scaleway Block Storage. - - - You must enter a size for the persistent volume that fits the amount of volume your deployment will create. - - - - If you plan to use Loki on a production system, make sure that you set up a retention period to avoid filling the file systems. Use these parameters to enable a 30-day retention (logs older than 30 days will be deleted), for example. - - - - `config.table_manager.retention_deletes_enabled` : true - - `config.table_manager.retention_period`: 720h -3. Install Grafana in the loki-stack namespace with Helm. Enable persistence to ensure Grafana remains stable in the event of a re-schedule. 
- - `persistence.enabled`: true - - `persistence.type`: pvc - - `persistence.size`: 10Gi - - ```bash - helm install loki-grafana grafana/grafana \ - --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \ - --namespace=loki-stack - ``` - - You can check if the block devices were correctly created by Kubernetes: - - ```bash no-copy - kubectl get pv,pvc -n loki-stack - - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - persistentvolume/pvc-88038939-24a5-4383-abe8-f3aab97b7ce7 10Gi RWO Delete Bound loki-stack/loki-grafana scw-bssd 18s - persistentvolume/pvc-c6fce993-a73d-4423-9464-7c10ab009062 100Gi RWO Delete Bound loki-stack/storage-loki-stack-0 scw-bssd 4m30s - - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - persistentvolumeclaim/loki-grafana Bound pvc-88038939-24a5-4383-abe8-f3aab97b7ce7 10Gi RWO scw-bssd 19s - persistentvolumeclaim/storage-loki-stack-0 Bound pvc-c6fce993-a73d-4423-9464-7c10ab009062 100Gi RWO scw-bssd 5m3s - ``` -4. Check if the pods are correctly running. - ```bash no-copy - kubectl get pods -n loki-stack - - NAME READY STATUS RESTARTS AGE - loki-grafana-67994589cc-7jq4t 0/1 Running 0 74s - loki-stack-0 1/1 Running 0 5m58s - loki-stack-promtail-dtf5v 1/1 Running 0 5m42s - ``` -5. Get the admin password. - ``` - kubectl get secret --namespace loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo - ``` -6. Configure a `port-forward` to reach Grafana from your web browser: - ```bash no-copy - kubectl port-forward --namespace loki-stack service/loki-grafana 3000:80 - Forwarding from 127.0.0.1:3000 -> 3000 - Forwarding from [::1]:3000 -> 3000 - ``` -7. Access `http://localhost:3000` to reach the Grafana interface. Log in using the admin user and the password you got above. - -8. Click **Configuration** > **Data Sources** in the side menu. -9. Click **+ Add Data Source**. -10. Select Loki. -11. 
Add the Loki source to Grafana (`http://loki-stack.loki-stack:3100`).
-
-12. Check you can access your logs using the explore tab in Grafana:
-
-
-You now have a Loki stack up and running. All your pod's logs will be stored in Loki and you will be able to view and query your Kubernetes logs in Grafana. Refer to the [Grafana documentation](https://grafana.com/docs/features/datasources/loki/), if you want to learn more about querying the Loki data source.
\ No newline at end of file
+    ```
+    helm repo add grafana https://grafana.github.io/helm-charts
+    helm repo update
+    ```
+2. Install Loki in a dedicated Kubernetes namespace named `loki-stack`, with persistence enabled:
+    ```
+    helm install loki grafana/loki-distributed \
+      --create-namespace \
+      --namespace loki-stack \
+      --set storage_config.aws.s3.force_path_style=true \
+      --set storage_config.aws.s3.endpoint=s3.fr-par.scw.cloud \
+      --set persistence.enabled=true,persistence.size=100Gi
+    ```
+3. Install Promtail for log collection:
+    ```
+    helm install promtail grafana/promtail \
+      --namespace loki-stack \
+      --set "config.clients[0].url=http://loki:3100/loki/api/v1/push"
+    ```
+4. Install Grafana with persistence enabled:
+    ```
+    helm install grafana grafana/grafana \
+      --namespace loki-stack \
+      --set persistence.enabled=true,persistence.size=10Gi
+    ```
+5. Check if the pods are running correctly:
+    ```
+    kubectl get pods -n loki-stack
+    ```
+6. Get the admin password for Grafana:
+    ```
+    kubectl get secret --namespace loki-stack grafana -o jsonpath="{.data.admin-password}" | base64 --decode
+    ```
+7. Configure port-forwarding to access Grafana from your browser:
+    ```
+    kubectl port-forward --namespace loki-stack service/grafana 3000:80
+    ```
+8. Access `http://localhost:3000` in your browser. Use the admin username and password retrieved earlier.
+
+9. Add Loki as a data source:
+    - Go to **Configuration** > **Data Sources**.
+    - Click **+ Add Data Source** and select **Loki**. 
+    - Enter the Loki URL: `http://loki.loki-stack:3100`.
+
+10. Verify the logs using Grafana's **Explore** tab. If you plan to use Loki on a production system, also configure a log retention period to avoid filling the persistent volume (the exact configuration keys depend on the chart version), for example:
+    ```
+    retention:
+      period: 30d
+      deletes_enabled: true
+    ```
+
+You now have Loki, Promtail, and Grafana running in your Kubernetes cluster. Logs from your pods are stored in Loki and can be queried in Grafana. Refer to the [Grafana documentation](https://grafana.com/docs/features/datasources/loki/) for advanced queries and visualization options.
\ No newline at end of file
diff --git a/tutorials/mist-streaming-server/index.mdx b/tutorials/mist-streaming-server/index.mdx
index 18964e5d49..9f7031c9eb 100644
--- a/tutorials/mist-streaming-server/index.mdx
+++ b/tutorials/mist-streaming-server/index.mdx
@@ -1,20 +1,20 @@
 ---
 meta:
-  title: Deploying a Mist Open Source Streaming Server
+  title: Deploying a Mist open source streaming server
   description: Explore how to deploy Mist, a streaming server solution, for broadcasting video content over the internet.
 content:
-  h1: Deploying a Mist Open Source Streaming Server
+  h1: Deploying a Mist open source streaming server
   paragraph: Explore how to deploy Mist, a streaming server solution, for broadcasting video content over the internet.
 categories:
   - compute
 tags: streaming mist-server Mist OBS
 hero: assets/scaleway_mistserver.webp
 dates:
-  validation: 2024-06-25
+  validation: 2025-01-02
 posted: 2020-07-01
 ---
 
-MistServer is one of the leading OTT (Internet Streaming) toolkits with an open source core. It allows you to deliver your media content to your users via the internet. Mist Server supports the OBS Studio suite, making it easy to set up your own web stream.
+MistServer is one of the leading OTT (internet streaming) toolkits with an open source core. It allows you to deliver your media content to your users via the internet. MistServer supports the OBS Studio suite, making it easy to set up your own web stream. 
diff --git a/tutorials/mlx-array-framework-apple-silicon/index.mdx b/tutorials/mlx-array-framework-apple-silicon/index.mdx index 2b3790610f..238f5faa1c 100644 --- a/tutorials/mlx-array-framework-apple-silicon/index.mdx +++ b/tutorials/mlx-array-framework-apple-silicon/index.mdx @@ -9,7 +9,7 @@ categories: - apple-silicon tags: apple-silicon mlx framework apple mac-mini dates: - validation: 2024-06-25 + validation: 2025-01-02 posted: 2023-12-15 --- diff --git a/tutorials/monitor-k8s-grafana/index.mdx b/tutorials/monitor-k8s-grafana/index.mdx index c9992b98ba..a3d016448f 100644 --- a/tutorials/monitor-k8s-grafana/index.mdx +++ b/tutorials/monitor-k8s-grafana/index.mdx @@ -10,7 +10,7 @@ categories: tags: kubernetes kapsule Prometheus monitoring Grafana hero: assets/scaleway_grafana.webp dates: - validation: 2024-06-17 + validation: 2025-01-02 posted: 2020-03-18 --- @@ -23,6 +23,10 @@ This tutorial will explain how to monitor your [Kubernetes Kapsule](https://www. - _[kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)_: `kube-state-metrics` listens to the Kubernetes API server and generates metrics about the state of the objects. The list of the exported metrics is available [here](https://github.com/kubernetes/kube-state-metrics/tree/master/docs). For instance, `kube-state-metrics` can report the number of pods ready (kube_pod_status_ready), or the number of unschedulable pods (kube_pod_status_unschedulable). - _[node-exporter](https://github.com/prometheus/node_exporter)_: The `node-exporter` is a Prometheus exporter for hardware and OS metrics exposed by the Linux Kernel. It allows you to get metrics about CPU, memory, file system for each Kubernetes node. + + Instead of setting up everything manually as described in this tutorial, you can use [Scaleway Cockpit](https://www.scaleway.com/en/cockpit/) to monitor your Kubernetes cluster easily and without additional configuration. 
+ + - A Scaleway account logged into the [console](https://console.scaleway.com) @@ -52,7 +56,7 @@ We are first going to deploy the `Prometheus` stack in a dedicated Kubernetes [n ``` helm install prometheus prometheus-community/prometheus --create-namespace --namespace monitoring --set server.persistentVolume.size=100Gi,server.retention=30d NAME: prometheus - LAST DEPLOYED: Fri Oct 9 16:35:50 2020 + LAST DEPLOYED: Thu Jan 9 14:30:50 2025 NAMESPACE: monitoring STATUS: DEPLOYED [..] @@ -66,14 +70,6 @@ We are first going to deploy the `Prometheus` stack in a dedicated Kubernetes [n pod/prometheus-node-exporter-fbg6s 1/1 Running 0 67s pod/prometheus-pushgateway-6d75c59b7b-6knfd 1/1 Running 0 67s pod/prometheus-server-556dbfdfb5-rx6nl 1/2 Running 0 67s - - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - persistentvolume/pvc-5a9def3b-22a1-4545-9adb-72823b899c36 100Gi RWO Delete Bound monitoring/prometheus-server scw-bssd 67s - persistentvolume/pvc-c5e24d9b-3a69-46c1-9120-b16b7adf73e9 2Gi RWO Delete Bound monitoring/prometheus-alertmanager scw-bssd 67s - - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - persistentvolumeclaim/prometheus-alertmanager Bound pvc-c5e24d9b-3a69-46c1-9120-b16b7adf73e9 2Gi RWO scw-bssd 68s - persistentvolumeclaim/prometheus-server Bound pvc-5a9def3b-22a1-4545-9adb-72823b899c36 100Gi RWO scw-bssd 68s ``` 3. To access `Prometheus` use the Kubernetes [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) feature: ``` diff --git a/tutorials/object-storage-s3fs/index.mdx b/tutorials/object-storage-s3fs/index.mdx index 9074b3f0e6..2825b61f89 100644 --- a/tutorials/object-storage-s3fs/index.mdx +++ b/tutorials/object-storage-s3fs/index.mdx @@ -1,23 +1,19 @@ --- meta: - title: Using Object Storage with s3fs - description: Learn how to use s3fs as a client for Object Storage in this step-by-step tutorial. 
+ title: Using Scaleway Object Storage with s3fs + description: Learn how to use s3fs as a client for Scaleway Object Storage in this step-by-step tutorial. content: - h1: Using Object Storage with s3fs - paragraph: Learn how to use s3fs as a client for Object Storage in this step-by-step tutorial. + h1: Using Scaleway Object Storage with s3fs + paragraph: Learn how to use s3fs as a client for Scaleway Object Storage in this step-by-step tutorial. tags: object-storage s3fs categories: - object-storage dates: - validation: 2024-06-25 + validation: 2025-01-02 posted: 2018-07-16 --- -In this tutorial you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-fuse) as a client for [Scaleway Object Storage](/storage/object/concepts/#object-storage). `s3fs` is a FUSE-backed file interface for S3, allowing you to mount your Object Storage buckets on your local Linux or macOS operating system. `s3fs` preserves the native object format for files, so they can be used with other tools including AWS CLI. - - - The version of `s3fs` available for installation using the systems package manager does not support files larger than 10 GB. It is therefore recommended to compile a version, including the required corrections, from the s3fs source code repository. This tutorial will guide you through that process. Note that even with the source code compiled version of s3fs, there is a [maximum file size of 128 GB](#configuring-s3fs) when using s3fs with Scaleway Object Storage. - +In this tutorial you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-fuse) as a client for [Scaleway Object Storage](/storage/object/concepts/#object-storage). `s3fs` is a FUSE-backed file interface for S3, allowing you to mount Object Storage buckets on your local Linux or macOS system. Files are preserved in their native object format, enabling compatibility with tools like AWS CLI. 
@@ -27,101 +23,96 @@ In this tutorial you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-f ## Installing s3fs -### Dependencies - -Start by installing the dependencies of `s3fs-fuse` by executing the following commands, depending on your operating system: - - - On **Debian and Ubuntu**, from the command line: - ```bash - apt update && apt upgrade -y - apt -y install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config - ``` - - On **CentOS**, from the command line: - ```bash - dnf update - dnf install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel - ``` - - On **macOS**, via [Homebrew](https://brew.sh): - ``` - brew install --cask osxfuse - brew install autoconf automake pkg-config gnutls libgcrypt nettle git - ``` +### Option 1: Install via package manager + +#### Debian/Ubuntu +```bash +sudo apt update && sudo apt upgrade -y +sudo apt install -y s3fs +``` + +#### CentOS/RHEL +```bash +sudo dnf update -y +sudo dnf install -y epel-release +sudo dnf install -y s3fs-fuse +``` + +#### macOS (via Homebrew) +```bash +brew install s3fs +``` + +### Option 2: Compile from source + +If the version provided by the package manager does not meet your requirements or you require special features, compile the tool from source. 
+ +#### Install dependencies +* Debian/Ubuntu: + ```bash + sudo apt install -y automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config + ``` +* CentOS/RHEL: + ```bash + sudo dnf install -y automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel + ``` +* macOS: + ```bash + brew install autoconf automake pkg-config gnutls libgcrypt nettle git + brew install --cask macfuse + ``` + +#### Build and install + +Run the following commands to download the latest version of `s3fs` and build and install it on your machine: +```bash +git clone https://github.com/s3fs-fuse/s3fs-fuse.git +cd s3fs-fuse +./autogen.sh +./configure +make +sudo make install +``` +## Configuring s3fs +1. Create a credentials file: + ```bash + echo ACCESS_KEY:SECRET_KEY > $HOME/.passwd-s3fs + chmod 600 $HOME/.passwd-s3fs + ``` - On macOS, you need to add permissions to FUSE. Go to the `Settings > Security & Privacy > General` tab to allow the extension. + Replace `ACCESS_KEY` and `SECRET_KEY` with your Scaleway credentials. -### s3fs-fuse - -Next, download and install `s3fs-fuse` itself: - -1. Download the Git repository of `s3fs-fuse`: - ```bash - git clone https://github.com/s3fs-fuse/s3fs-fuse.git - ``` -2. Enter the s3fs-fuse directory: - ```bash - cd s3fs-fuse - ``` -3. Update the `MAX_MULTIPART_CNT` value in the `fdcache_entity.cpp` file: - - On **Linux**: - - ```bash - sed -i 's/MAX_MULTIPART_CNT = 10 /MAX_MULTIPART_CNT = 1 /' src/fdcache_entity.cpp - ``` - - - On **macOS**: - - ```bash - sed -i '' -e 's/MAX_MULTIPART_CNT = 10 /MAX_MULTIPART_CNT = 1 /' src/fdcache_entity.cpp - ``` -4. Run the `autogen.sh` script to generate a configuration file, configure the application, and compile it from the master branch: - ```bash - ./autogen.sh - ./configure - make - ``` -5. Run the installation of the application using the `make install` command: - ```bash - make install - ``` -6. 
Copy the application into its final destination to complete the installation: - ```bash - cp ~/s3fs-fuse/src/s3fs /usr/local/bin/s3fs - ``` - -## Configuring s3fs - -1. Execute the following commands to enter your credentials (separated by a `:`) in a file `$HOME/.passwd-s3fs` and set owner-only permissions. This presumes that you have set your [API credentials](/identity-and-access-management/iam/how-to/create-api-keys/) as environment variables named `ACCESS_KEY` and `SECRET_KEY`: - ``` - echo $ACCESS_KEY:$SECRET_KEY > $HOME/.passwd-s3fs - chmod 600 $HOME/.passwd-s3fs - ``` -2. Execute the following commands to create a file system from an existing bucket. Make the following replacements in the command text: - - Replace `$SCW-BUCKET-NAME` with the name of your Object Storage bucket and `$FOLDER-TO-MOUNT` with the local folder to mount it in. - - Replace the `endpoint` parameter with the location of your bucket (`fr-par` for Paris, `nl-ams` for Amsterdam, or `pl-waw` for Warsaw). - - Replace `s3.fr-par.scw.cloud` with the address of the storage cluster of your bucket. It can either be `s3.nl-ams.scw.cloud` (Amsterdam, The Netherlands), `s3.fr-par.scw.cloud` (Paris, France), or `s3.pl-waw.scw.cloud` (Warsaw, Poland). - ``` - s3fs $SCW-BUCKET-NAME $FOLDER-TO-MOUNT -o allow_other -o passwd_file=$HOME/.passwd-s3fs -o use_path_request_style -o endpoint=fr-par -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o url=https://s3.fr-par.scw.cloud - ``` - - The flag `-o multipart_size=128` sets the chunk (file-part) size for [multipart uploads](/storage/object/concepts/#multipart-uploads) to 128 MB. This value allows you to upload files up to a maximum file size of 128 GB. Lower values will give you better performances. 
You can set it to:
-        - A minimum chunk size of 5 MB, to increase performance (Maximum file size: 5 GB)
-        - A maximum chunk size of 5000 MB, to increase the maximum file size (Maximum file size: 5 TB)
-
-
-
-
-    You must carry out the command as root for the `allow_other` argument to be allowed.
-
-3. Add the following line to `/etc/fstab` to mount the file system on boot. Replace `s3.fr-par.scw.cloud` with the address corresponding to your bucket's geographical location:
-    ```
-    s3fs/#[bucket_name] /mount-point fuse _netdev,allow_other,use_path_request_style,url=https://s3.fr-par.scw.cloud/ 0 0
-    ```
-
-## Using Object Storage with s3fs
-
-The file system of the mounted bucket will appear in your OS like a local file system. This means you can access the files as if they were on your hard drive.
+2. Create a mount point directory:
+   ```bash
+   mkdir -p /path/to/mountpoint
+   ```
+
+3. Mount the bucket:
+   ```bash
+   s3fs BUCKET_NAME /path/to/mountpoint \
+     -o allow_other \
+     -o passwd_file=$HOME/.passwd-s3fs \
+     -o use_path_request_style \
+     -o endpoint=fr-par \
+     -o parallel_count=15 \
+     -o multipart_size=128 \
+     -o nocopyapi \
+     -o url=https://s3.fr-par.scw.cloud
+   ```
+   Replace:
+   - `BUCKET_NAME`: Your bucket name.
+   - `/path/to/mountpoint`: The local path where the bucket is mounted.
+   - `fr-par`: Your bucket's region (`fr-par`, `nl-ams`, or `pl-waw`).
+   - `s3.fr-par.scw.cloud`: The endpoint URL matching your bucket's region.
+
+4. Configure automount on boot by adding the following line to `/etc/fstab`:
+   ```fstab
+   s3fs#BUCKET_NAME /path/to/mountpoint fuse _netdev,allow_other,use_path_request_style,url=https://s3.fr-par.scw.cloud 0 0
+   ```
+
+Once mounted, the bucket behaves like a local filesystem. You can copy, move, and manage files directly.
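Before mounting, it is worth double-checking the credentials file from step 1, since `s3fs` rejects credential files that are readable by other users. A minimal sketch with placeholder keys (substitute your real Scaleway API key pair):

```shell
#!/bin/sh
# Placeholder credentials -- replace with your real Scaleway access and secret keys.
ACCESS_KEY="SCWEXAMPLEKEY"
SECRET_KEY="example-secret"

PASSWD_FILE="$HOME/.passwd-s3fs"

# printf keeps the key pair on a single ACCESS:SECRET line without word splitting.
printf '%s:%s\n' "$ACCESS_KEY" "$SECRET_KEY" > "$PASSWD_FILE"

# s3fs requires the file to be readable by its owner only.
chmod 600 "$PASSWD_FILE"

# Show the resulting permissions (should read -rw-------).
ls -l "$PASSWD_FILE"
```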
Note that there are some limitations when using Object Storage as a file system:

@@ -130,4 +121,11 @@ Note that there are some limitations when using Object Storage as a file system:
 - Eventual consistency can temporarily yield stale data
 - No atomic renames of files or directories
 - No coordination between multiple clients mounting the same bucket
-- No hard links.
\ No newline at end of file
+- No hard links.
+
+## Troubleshooting tips
+
+- **Permission issues**: Ensure `/etc/fuse.conf` contains `user_allow_other`.
+- **Mount failures**: Check logs with `dmesg` or `/var/log/syslog`.
+- **Performance**: Adjust `multipart_size` (e.g. 5 MB for faster uploads, 5000 MB for larger files).
+- **Reconnection on network loss**: Remount manually or automate reconnections with a script.
diff --git a/tutorials/scaleway-packer-plugin/index.mdx b/tutorials/scaleway-packer-plugin/index.mdx
index bebbb3f6ec..18ded69f1b 100644
--- a/tutorials/scaleway-packer-plugin/index.mdx
+++ b/tutorials/scaleway-packer-plugin/index.mdx
@@ -10,7 +10,7 @@ categories:
 - instances
 tags: packer images instances
 dates:
-  validation: 2024-06-25
+  validation: 2025-01-02
 posted: 2023-06-06
 ---
 
diff --git a/tutorials/waypoint-plugin-scaleway/index.mdx b/tutorials/waypoint-plugin-scaleway/index.mdx
index 4bef918741..fba31293d9 100644
--- a/tutorials/waypoint-plugin-scaleway/index.mdx
+++ b/tutorials/waypoint-plugin-scaleway/index.mdx
@@ -15,7 +15,7 @@ dates:
 posted: 2023-06-15
 ---
 
-Waypoint is an open-source tool developed by HashiCorp that focuses on simplifying the deployment and release workflows for applications.
+Waypoint is an open source tool developed by HashiCorp that focuses on simplifying the deployment and release workflows for applications.
 The main goal of Waypoint is to abstract away the complexities of different deployment targets and provide a consistent interface for developers and operators.
It allows developers to deploy, manage, and observe their applications using a simple and declarative configuration file.
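The declarative configuration file mentioned above is `waypoint.hcl`. As a rough illustration only (the project and app names are made up, and the Docker plugin is just one possible builder/platform choice), it can look like this:

```hcl
# waypoint.hcl -- illustrative sketch; names and plugin choices are assumptions.
project = "example-project"

app "web" {
  build {
    # Build the application into a Docker image.
    use "docker" {}
  }

  deploy {
    # Deploy the resulting image with the Docker platform plugin.
    use "docker" {}
  }
}
```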