diff --git a/tutorials/ceph-cluster/assets/scaleway-ceph_s3.webp b/tutorials/ceph-cluster/assets/scaleway-ceph_s3.webp
new file mode 100644
index 0000000000..67488eb8cf
Binary files /dev/null and b/tutorials/ceph-cluster/assets/scaleway-ceph_s3.webp differ
diff --git a/tutorials/ceph-cluster/assets/scaleway_ceph.webp b/tutorials/ceph-cluster/assets/scaleway_ceph.webp
new file mode 100644
index 0000000000..959175bb65
Binary files /dev/null and b/tutorials/ceph-cluster/assets/scaleway_ceph.webp differ
diff --git a/tutorials/ceph-cluster/index.mdx b/tutorials/ceph-cluster/index.mdx
new file mode 100644
index 0000000000..df4612b766
--- /dev/null
+++ b/tutorials/ceph-cluster/index.mdx
@@ -0,0 +1,227 @@
+---
+meta:
+ title: Building your own Ceph distributed storage cluster on dedicated servers
+  description: Learn how to deploy a three-node Ceph storage cluster with S3-compatible object storage on Scaleway Dedibox dedicated servers.
+content:
+ h1: Building your own Ceph distributed storage cluster on dedicated servers
+  paragraph: Learn how to deploy a three-node Ceph storage cluster with S3-compatible object storage on Scaleway Dedibox dedicated servers.
+categories:
+ - object-storage
+ - dedibox
+tags: dedicated-servers dedibox Ceph object-storage
+hero: assets/scaleway_ceph.webp
+dates:
+ validation: 2025-06-25
+ validation_frequency: 18
+ posted: 2020-06-29
+---
+
+Ceph is an open-source, software-defined storage solution that provides object, block, and file storage at exabyte scale. It is self-healing, self-managing, and fault-tolerant, using commodity hardware to minimize costs.
+This tutorial guides you through deploying a three-node Ceph cluster with a RADOS Gateway (RGW) for S3-compatible object storage on Dedibox dedicated servers running Ubuntu 24.04 LTS.
+
+
+
+- A Dedibox account logged into the [console](https://console.online.net)
+- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- 3 Dedibox servers (Ceph nodes) running Ubuntu 24.04 LTS, each with:
+  - At least 8 GB of RAM, 4 CPU cores, and one unused data disk (e.g., `/dev/sdb`) for OSDs.
+ - Network connectivity between nodes and the admin machine.
+- An admin machine (Ubuntu 24.04 LTS recommended) with SSH access to Ceph nodes.
+
+## Configure networking and SSH
+
+1. Log in to each of the Ceph nodes and the admin machine using SSH.
+
+2. Install software dependencies on all nodes and the admin machine:
+ ```
+ sudo apt update
+ sudo apt install -y python3 chrony lvm2 podman
+ sudo systemctl enable chrony
+ ```
+
+3. Set unique hostnames on each Ceph node:
+ ```
+ sudo hostnamectl set-hostname ceph-node-a # Repeat for ceph-node-b, ceph-node-c
+ ```
+
+4. Configure `/etc/hosts` on all nodes and the admin machine to resolve hostnames, replacing the placeholders with the IP address of each node:
+    ```
+    echo "<node_a_ip> ceph-node-a" | sudo tee -a /etc/hosts
+    echo "<node_b_ip> ceph-node-b" | sudo tee -a /etc/hosts
+    echo "<node_c_ip> ceph-node-c" | sudo tee -a /etc/hosts
+    ```
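+
+    For example, with hypothetical private IP addresses, the resulting `/etc/hosts` entries would look like:
+    ```
+    10.0.0.11 ceph-node-a
+    10.0.0.12 ceph-node-b
+    10.0.0.13 ceph-node-c
+    ```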
+
+5. Create a deployment user (cephadm) on each Ceph node:
+ ```
+ sudo useradd -m -s /bin/bash cephadm
+ sudo passwd cephadm
+ echo "cephadm ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
+ sudo chmod 0440 /etc/sudoers.d/cephadm
+ ```
+
+6. Enable passwordless SSH from the admin machine to Ceph nodes:
+ ```
+ ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
+ ssh-copy-id cephadm@ceph-node-a
+ ssh-copy-id cephadm@ceph-node-b
+ ssh-copy-id cephadm@ceph-node-c
+ ```
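+
+    To confirm that key-based login works, a quick check using the hostnames configured above:
+    ```
+    for node in ceph-node-a ceph-node-b ceph-node-c; do ssh cephadm@$node hostname; done
+    ```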
+
+7. Verify time synchronization on all nodes:
+ ```
+ chronyc sources
+ ```
+
+## Install cephadm on the admin machine
+
+1. Add the Ceph repository for the latest stable release (e.g., Squid):
+    ```
+    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
+    sudo apt-add-repository 'deb https://download.ceph.com/debian-squid/ jammy main'
+    sudo apt update
+    ```
+
+    Replace `jammy` with the codename of your Ubuntu release (`noble` for 24.04 LTS) if Ceph publishes packages for it, and adjust `debian-squid` if you target a different Ceph release.
+
+2. Install `cephadm`:
+ ```
+ sudo apt install -y cephadm
+ ```
+
+3. Verify the installation:
+ ```
+ cephadm --version
+ ```
+
+
+## Bootstrap the Ceph cluster
+
+1. Bootstrap the cluster on the admin machine, using the admin node’s IP address as the monitor IP and a dashboard password of your choice:
+    ```
+    sudo cephadm bootstrap \
+      --mon-ip <admin_node_ip> \
+      --initial-dashboard-user admin \
+      --initial-dashboard-password <a_secure_password>
+    ```
+
+2. Access the Ceph dashboard at `https://<admin_node_ip>:8443` to verify the setup.
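+
+3. To run the `ceph` commands used in the following sections directly on the admin machine, install the Ceph CLI. One convenient option, assuming the Ceph APT repository added earlier (alternatively, any `ceph` command can be run inside `sudo cephadm shell`):
+    ```
+    sudo cephadm install ceph-common
+    sudo ceph status
+    ```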
+
+
+## Add Ceph nodes to the cluster
+
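+Before adding hosts, install the cluster's public SSH key, generated during bootstrap at `/etc/ceph/ceph.pub`, on each node so that the orchestrator can reach it. A sketch, assuming root SSH access to the nodes:
+
+```
+ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-a
+ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-b
+ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-c
+```
+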
+1. Add each Ceph node to the cluster:
+    ```
+    sudo ceph orch host add ceph-node-a
+    sudo ceph orch host add ceph-node-b
+    sudo ceph orch host add ceph-node-c
+    ```
+
+2. Verify hosts:
+ ```
+ sudo ceph orch host ls
+ ```
+
+
+
+### Deploy object storage daemons (OSDs)
+
+1. List available disks on each node:
+ ```
+ sudo ceph orch device ls
+ ```
+
+2. Deploy OSDs on all available unused disks:
+ ```
+ sudo ceph orch apply osd --all-available-devices
+ ```
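+
+    If you prefer to target specific disks instead of every available device, OSDs can also be created per device; a sketch, assuming the spare data disk is `/dev/sdb` on each node as listed in the requirements:
+    ```
+    sudo ceph orch daemon add osd ceph-node-a:/dev/sdb
+    sudo ceph orch daemon add osd ceph-node-b:/dev/sdb
+    sudo ceph orch daemon add osd ceph-node-c:/dev/sdb
+    ```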
+
+3. Verify the OSD deployment:
+ ```
+ sudo ceph osd tree
+ ```
+
+
+### Deploy the RADOS Gateway (RGW)
+
+1. Deploy a single RGW instance on ceph-node-a:
+    ```
+    sudo ceph orch apply rgw default --placement="1 ceph-node-a" --port=80
+    ```
+ ```
+
+2. To use HTTPS (recommended), generate a self-signed certificate:
+    ```
+    sudo mkdir -p /etc/ceph/private
+    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+      -keyout /etc/ceph/private/rgw.key \
+      -out /etc/ceph/private/rgw.crt \
+      -subj "/CN=ceph-node-a"
+    ```
+
+3. Redeploy RGW with HTTPS. With `cephadm`, the TLS certificate is supplied through an RGW service specification rather than through command-line flags. Create a specification file `rgw-ssl.yaml` based on the sketch shown after this command, then apply it:
+    ```
+    sudo ceph orch apply -i rgw-ssl.yaml
+    ```
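+
+    A minimal `rgw-ssl.yaml` sketch (an assumption based on the generic RGW service specification format; paste the contents of `rgw.key` and `rgw.crt`, key first, under `rgw_frontend_ssl_certificate`):
+    ```
+    service_type: rgw
+    service_id: default
+    placement:
+      count: 1
+      hosts:
+        - ceph-node-a
+    spec:
+      rgw_frontend_port: 443
+      ssl: true
+      rgw_frontend_ssl_certificate: |
+        -----BEGIN PRIVATE KEY-----
+        ...
+        -----END PRIVATE KEY-----
+        -----BEGIN CERTIFICATE-----
+        ...
+        -----END CERTIFICATE-----
+    ```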
+
+4. Verify RGW by accessing `http://ceph-node-a:80` (or `https://ceph-node-a:443` if you enabled HTTPS).
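+
+    As an additional check, list the RGW daemon from the admin machine:
+    ```
+    sudo ceph orch ps --daemon-type rgw
+    ```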
+
+## Create an RGW user
+
+1. Create a user for S3-compatible access:
+ ```
+ sudo radosgw-admin user create \
+ --uid=johndoe \
+ --display-name="John Doe" \
+ --email=john@example.com
+ ```
+
+ Note the generated `access_key` and `secret_key` from the output.
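+
+    If you need to look the keys up again later, they can be retrieved at any time:
+    ```
+    sudo radosgw-admin user info --uid=johndoe
+    ```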
+
+
+## Configure the AWS CLI for Object Storage
+
+1. Install the AWS CLI and the endpoint plugin on the admin machine or any client machine:
+    ```
+    pip3 install awscli awscli-plugin-endpoint
+    ```
+
+    On Ubuntu 24.04, system-wide `pip` installs are blocked by default, so run this command inside a Python virtual environment (created with `python3 -m venv`) and use the `aws` binary from that environment.
+
+2. Create a configuration file `~/.aws/config`:
+    ```
+    [plugins]
+    endpoint = awscli_plugin_endpoint
+
+    [default]
+    region = default
+    s3 =
+      endpoint_url = http://ceph-node-a:80
+      signature_version = s3v4
+    s3api =
+      endpoint_url = http://ceph-node-a:80
+    ```
+
+    For HTTPS, use `https://ceph-node-a:443` as the endpoint URL.
+
+    Then create `~/.aws/credentials` with the keys generated for the RGW user, replacing the placeholders with the values returned by `radosgw-admin user create`:
+    ```
+    [default]
+    aws_access_key_id = <access_key>
+    aws_secret_access_key = <secret_key>
+    ```
+
+3. Test the setup:
+ ```sh
+ aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80
+ echo "Hello Ceph!" > testfile.txt
+ aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80
+ aws s3 ls s3://mybucket --endpoint-url http://ceph-node-a:80
+ ```
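+
+    Optionally, read the object back and remove the test bucket afterwards:
+    ```sh
+    aws s3 cp s3://mybucket/testfile.txt - --endpoint-url http://ceph-node-a:80
+    aws s3 rb s3://mybucket --force --endpoint-url http://ceph-node-a:80
+    ```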
+
+4. Verify the cluster health status:
+ ```
+ sudo ceph -s
+ ```
+
+ Ensure the output shows `HEALTH_OK`.
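+
+    If the status shows `HEALTH_WARN` instead, more detail about the affected checks is available with:
+    ```
+    sudo ceph health detail
+    ```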
+
+## Conclusion
+
+You have deployed a Ceph storage cluster with S3-compatible object storage using three Dedibox servers on Ubuntu 24.04 LTS. The cluster is managed with `cephadm`, ensuring modern orchestration and scalability. For advanced configurations (e.g., multi-zone RGW, monitoring with Prometheus), refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/).
\ No newline at end of file