Merged
24 changes: 12 additions & 12 deletions blocks/highlighted-tutorials.mdx
@@ -1,19 +1,19 @@
-import plausible from "./assets/scaleway-plausible-tutorial-time.webp"
-import sentry from "./assets/scaleway-sentry-tutorial-time.webp"
+import kapsule from "./assets/scaleway-kubernetes-kapsule-tutorial-time.webp"
+import object from "./assets/scaleway-object-storage-tutorial-time.webp"
 
 <Grid>
   <Card
-    image={plausible}
-    title="Running web analytics with Plausible on Ubuntu Linux"
-    tags="Analytics Plausible Ubuntu"
-    label="Open Plausible tutorial"
-    url="/tutorials/plausible-analytics-ubuntu"
+    image={kapsule}
+    title="Deploying a demo application on Scaleway Kubernetes Kapsule"
+    tags="Kubernetes Kapsule"
+    label="Open Kubernetes Kapsule tutorial"
+    url="/tutorials/deploy-demo-application-kubernetes-kapsule/"
   />
   <Card
-    image={sentry}
-    title="Configuring Sentry error tracking"
-    tags="Sentry"
-    label="Open Sentry tutorial"
-    url="/tutorials/sentry-error-tracking"
+    image={object}
+    title="Build and deploy an MkDocs static website with GitHub Actions CI/CD"
+    tags="MkDocs"
+    label="Open MkDocs tutorial"
+    url="/tutorials/deploy-automate-mkdocs-site/"
   />
 </Grid>
287 changes: 287 additions & 0 deletions tutorials/deploy-demo-application-kubernetes-kapsule/index.mdx
@@ -0,0 +1,287 @@
---
title: Deploying a demo application on Scaleway Kubernetes Kapsule
description: This page shows you how to deploy a demo application on Scaleway Kubernetes Kapsule
tags: Kubernetes Kapsule k8s
products:
- kubernetes
dates:
validation: 2025-08-13
posted: 2025-08-13
validation_frequency: 12
---
import Requirements from '@macros/iam/requirements.mdx'


# Deploying a demo application on Scaleway Kubernetes Kapsule

This tutorial guides you through deploying a demo application (`whoami`) on Scaleway Kubernetes Kapsule. You will create a managed Kubernetes cluster, deploy a sample application, configure an ingress controller for external access, set up auto-scaling, and test the setup.
It is designed for users with a basic understanding of Kubernetes concepts such as Pods, Deployments, Services, and Ingress.

<Requirements />

- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- A [Scaleway API key](/iam/how-to/create-api-keys/)
- The `kubectl`, `scw` (Scaleway CLI), and `helm` tools installed on your local machine
- Basic familiarity with Kubernetes concepts (Pods, Deployments, Services, Ingress).

## Configure Scaleway CLI

Configure the [Scaleway CLI (v2)](https://github.com/scaleway/scaleway-cli) to manage your Kubernetes Kapsule cluster.

1. Install the Scaleway CLI (if not already installed):

```bash
curl -s https://raw.githubusercontent.com/scaleway/scaleway-cli/master/scripts/get.sh | sh
```

2. Initialize the CLI with your API key:

```bash
scw init
```

Follow the prompts to enter your `SCW_ACCESS_KEY`, `SCW_SECRET_KEY`, and default region (e.g., `pl-waw` for Warsaw, Poland).
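
If you prefer a non-interactive setup (for scripting or CI, for example), the CLI also reads its configuration from environment variables instead of the interactive prompts. A minimal sketch with placeholder credentials:

```bash
# Non-interactive alternative to `scw init`: the Scaleway CLI reads
# these environment variables (placeholder values shown).
export SCW_ACCESS_KEY="<your-access-key>"
export SCW_SECRET_KEY="<your-secret-key>"
export SCW_DEFAULT_PROJECT_ID="<your-project-id>"
export SCW_DEFAULT_REGION="pl-waw"

echo "Region set to: $SCW_DEFAULT_REGION"
```

Environment variables take precedence over the profile written by `scw init`, which makes them convenient for switching between accounts.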

## Create a Kubernetes Kapsule cluster

Create a managed Kubernetes cluster using the Scaleway CLI.

1. Run the following command to create a cluster with a single node pool:

```bash
scw k8s cluster create name=demo-cluster version=1.32.7 \
  pools.0.size=2 pools.0.node-type=DEV1-M pools.0.name=default \
  pools.0.min-size=1 pools.0.max-size=3 pools.0.autoscaling=true \
  region=pl-waw
```

- `version=1.32.7`: Specifies a [recent Kubernetes version](/kubernetes/reference-content/version-support-policy/#scaleway-kubernetes-products).
- `pools.0.size=2`: Starts with two nodes.
- `pools.0.min-size=1`, `pools.0.max-size=3`, `pools.0.autoscaling=true`: Enables node auto-scaling.
- `region=pl-waw`: Deploys in the Warsaw region.

2. Retrieve the cluster ID and download the kubeconfig file:

```bash
# Look up the ID of the cluster named demo-cluster, then fetch its kubeconfig
CLUSTER_ID=$(scw k8s cluster list | grep demo-cluster | awk '{print $1}')
scw k8s kubeconfig get $CLUSTER_ID > ~/.kube/demo-cluster-config
export KUBECONFIG=~/.kube/demo-cluster-config
```

<Message type="tip">
Alternatively, you can copy the cluster ID from the output after cluster creation and install the kubeconfig file using the following command:
```bash
scw k8s kubeconfig install <CLUSTER_ID>
```
</Message>

3. Verify cluster connectivity:

```bash
kubectl get nodes
```

Ensure all nodes are in the `Ready` state.
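
Node provisioning can take a few minutes. Rather than polling `kubectl get nodes` manually, you can block until every node reports `Ready` (a sketch; adjust the timeout to taste):

```bash
# Block until all nodes report the Ready condition, or fail after 5 minutes
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```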

## Deploy a sample application

Deploy the [whoami](https://github.com/traefik/whoami) application (a small web server that echoes back request information, commonly used to test cluster deployments) using a Kubernetes Deployment and Service.

1. Create a file named `whoami-deployment.yaml` with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "200m"
              memory: "256Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
  namespace: default
spec:
  selector:
    app: whoami
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
```

2. Apply the configuration:

```bash
kubectl apply -f whoami-deployment.yaml
```

3. Verify the deployment and service:

```bash
kubectl get deployments
kubectl get pods
kubectl get services
```
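
Before wiring up external access, you can sanity-check the Service from your workstation with a temporary port-forward (a sketch; the `whoami` pods should echo back the request details):

```bash
# Forward local port 8080 to the whoami Service, query it, then clean up
kubectl port-forward svc/whoami-service 8080:80 &
PF_PID=$!
sleep 2                       # give the forward a moment to establish
curl -s http://localhost:8080
kill $PF_PID
```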

## Configure an ingress controller

Expose the `whoami` application externally using an [Nginx ingress controller](/kubernetes/reference-content/lb-ingress-controller/).

<Message type="note">
Before proceeding, ensure the [Helm package manager](/tutorials/kubernetes-package-management-helm/) is installed on your local machine. If it is not already installed, you will need to set it up first.
</Message>

1. Install the Nginx ingress controller using Helm:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
```

2. Create a file named `whoami-ingress.yaml` with the following content:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: whoami.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  number: 80
```

3. Apply the Ingress configuration:

```bash
kubectl apply -f whoami-ingress.yaml
```

4. Retrieve the external IP of the Ingress controller:

```bash
kubectl get svc -n ingress-nginx ingress-nginx-controller
```
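
The Ingress rule matches on the hostname `whoami.example.com`, which is a placeholder. If you do not own a domain, you can resolve the hostname locally by adding an entry to your `/etc/hosts` file. A sketch that builds the line to append (using a documentation example address in place of your real ingress IP):

```bash
# Build an /etc/hosts entry mapping the placeholder hostname to the
# ingress IP (203.0.113.10 is a documentation example address; use the
# EXTERNAL-IP reported by the command above)
INGRESS_IP="203.0.113.10"
printf '%s whoami.example.com\n' "$INGRESS_IP"
```

Append the printed line to `/etc/hosts` (this requires `sudo`) and plain `curl http://whoami.example.com` will then reach the controller.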

## Set up auto-scaling

Configure [Horizontal Pod Autoscaling (HPA)](https://www.scaleway.com/en/blog/understanding-kubernetes-autoscaling/) to dynamically scale the `whoami` application based on CPU usage.

1. Create a file named `whoami-hpa.yaml` with the following content:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: whoami-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: whoami
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

2. Apply the HPA configuration:

```bash
kubectl apply -f whoami-hpa.yaml
```

3. Verify the HPA status:

```bash
kubectl get hpa
kubectl describe hpa whoami-hpa
```
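
To see why `averageUtilization: 70` matters, recall the HPA's core formula: desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization), where utilization is measured against each pod's CPU *request* (here 100m, so the target works out to 70m per pod). A sketch of the arithmetic with hypothetical observed usage:

```bash
# HPA scaling arithmetic (hypothetical numbers):
# desired = ceil(current_replicas * current_utilization / target_utilization)
current_replicas=2
current_cpu_m=140   # hypothetical observed average CPU per pod, in millicores
target_cpu_m=70     # 70% of the 100m request
# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu_m + target_cpu_m - 1) / target_cpu_m ))
echo "desired replicas: $desired"   # ceil(2 * 140 / 70) = 4
```

The HPA reads these utilization figures from the resource-metrics API (metrics-server); if `kubectl top pods` returns data, the metrics pipeline is working.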

## Test the application

1. Get the Ingress controller’s external IP:

```bash
INGRESS_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```

2. Test the application by sending an HTTP request (replace `whoami.example.com` with your domain or use the IP directly):

```bash
curl -H "Host: whoami.example.com" http://$INGRESS_IP
```

3. Simulate load to trigger auto-scaling (optional):

```bash
kubectl run -i --tty load-generator --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://whoami-service.default.svc.cluster.local; done"
```

When you are done, stop the generator with `Ctrl+C` and remove the pod with `kubectl delete pod load-generator`.

4. Open another terminal and monitor pod scaling:

```bash
kubectl get pods -w
kubectl get hpa -w
```

## Clean up

Delete the cluster to avoid unnecessary costs. Note that the ingress controller's `LoadBalancer` Service provisions a Scaleway Load Balancer, which is billed separately from the cluster.

1. Delete the cluster together with the additional resources (such as Load Balancers and volumes) created alongside it:

```bash
scw k8s cluster delete $CLUSTER_ID with-additional-resources=true
```

2. Confirm the cluster is deleted:

```bash
scw k8s cluster list
```

## Conclusion

This tutorial has guided you through the full lifecycle of a Kubernetes deployment, from creating a cluster and deploying an application to configuring ingress, enabling autoscaling, performing load testing, monitoring scaling behavior, and cleaning up resources.
You have taken the first steps toward effectively managing cloud-native applications on Scaleway, combining manual resource control with automated scaling to build resilient, efficient, and scalable systems.