diff --git a/content/kubermatic/main/tutorials-howtos/networking/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/_index.en.md
index 70151ec82..e73a5f165 100644
--- a/content/kubermatic/main/tutorials-howtos/networking/_index.en.md
+++ b/content/kubermatic/main/tutorials-howtos/networking/_index.en.md
@@ -15,5 +15,6 @@ This section provides guides on networking in KKP:
 - [**Manual CNI Migration**]({{< relref "./cni-migration/" >}})
 - [**Multus-CNI Addon**]({{< relref "./multus/" >}})
 - [**Multi-Cluster IP Address Management (IPAM)**]({{< relref "./ipam/" >}})
+- [**Network Policy**]({{< relref "./network-policy/" >}})
 - [**Cilium Cluster Mesh on KKP**]({{< relref "./cilium-cluster-mesh/" >}})
 - [**Using HTTP Proxy with KKP**]({{< relref "./httpproxy/" >}})
diff --git a/content/kubermatic/main/tutorials-howtos/networking/network-policy/_index.en.md b/content/kubermatic/main/tutorials-howtos/networking/network-policy/_index.en.md
new file mode 100644
index 000000000..0133c9a3e
--- /dev/null
+++ b/content/kubermatic/main/tutorials-howtos/networking/network-policy/_index.en.md
@@ -0,0 +1,235 @@
++++
+title = "Network Policy"
+date = 2025-10-13T12:00:00+03:00
+weight = 120
++++
+
+## Network Policy
+
+This document outlines the use of the standard Kubernetes `NetworkPolicy` resource as a primary security control at the IP address or port level (OSI layer 3 or 4) within the cluster.
+
+By default, Kubernetes networking is flat and fully permissive: any pod can initiate network connections to any other pod within the same cluster. This "default-allow" posture presents a significant security challenge. Network Policies are Kubernetes' built-in firewall rules that control which pods can talk to each other, providing a declarative virtual firewall at the pod level.
+
+Kubermatic Kubernetes Platform (KKP) already supports `NetworkPolicy` resources to ensure network isolation in user clusters. The enforcement of these policies is handled by the underlying CNI plugin, such as Cilium or Canal, both of which are supported by default in KKP.
+
+## Example: Deploy AI Workloads in a KKP User Cluster
+
+{{% notice warning %}}
+This example is for demonstration purposes only and is not suitable for production use.
+{{% /notice %}}
+
+We'll demonstrate this concept by securing a LocalAI deployment (an OpenAI-compatible API) so that only authorized services can access it.
+
+LocalAI can be deployed through the KKP UI from the Applications tab. When deployed, it creates pods with specific labels that we'll use for network isolation. For this example, we'll assume LocalAI is deployed with default settings in the `local-ai` namespace.
+
+Similarly, deploy the Nginx Ingress Controller from KKP's default Application Catalog. It will handle external traffic and route it to your AI services.
+
+{{% notice note %}}
+The LocalAI and Nginx applications can be deployed via the KKP UI by navigating to your cluster's Applications tab and selecting them from the catalog. For detailed deployment instructions, refer to the [LocalAI Application documentation]({{< ref "../../../architecture/concept/kkp-concepts/applications/default-applications-catalog/local-ai/" >}}) and the [Nginx Application documentation]({{< ref "../../../architecture/concept/kkp-concepts/applications/default-applications-catalog/nginx/" >}}).
+{{% /notice %}}
+
+Before proceeding, ensure that `kubectl` is properly configured and points to your user cluster, not the seed cluster:
+
+```bash
+# Check the current context
+kubectl config current-context
+# or, set the context explicitly (point KUBECONFIG at your user cluster kubeconfig file)
+export KUBECONFIG=$HOME/.kube/
+```
+
+### Expose Your AI Service
+
+First, let's make LocalAI accessible through the Nginx Ingress Controller by creating an Ingress resource. The manifest below matches the defaults used in this example (host `localai.local`, service `local-ai-local-ai` on port `8080` in the `local-ai` namespace); adjust it if your deployment differs:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: local-ai
+  namespace: local-ai
+spec:
+  ingressClassName: nginx
+  rules:
+  - host: localai.local
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: local-ai-local-ai
+            port:
+              number: 8080
+EOF
+
+# Look up the external endpoint of the Nginx ingress controller.
+# The namespace, labels, and field (.ip vs. .hostname) may differ depending on how Nginx was deployed.
+INGRESS_ENDPOINT=$(kubectl get svc -n nginx -l app.kubernetes.io/name=ingress-nginx \
+  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
+echo "==> Ingress endpoint: $INGRESS_ENDPOINT"
+```
+
+Before applying any Network Policies, let's see how accessible LocalAI is. Currently, any pod in the cluster can reach it.
+
+- External access through the Nginx ingress:
+```bash
+$ curl -H "Host: localai.local" http://$INGRESS_ENDPOINT/v1/models
+{"object":"list","data":[]}
+```
+
+- In-cluster access across namespaces:
+```bash
+$ kubectl run test-connectivity -n nginx --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 \
+  --rm -i --restart=Never -- wget -T 5 -q -O - http://local-ai-local-ai.local-ai.svc.cluster.local:8080/v1/models
+{"object":"list","data":[]}
+```
+
+- In-cluster access in the service's `local-ai` namespace:
+```bash
+$ kubectl run test-connectivity -n local-ai --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 \
+  --rm -i --restart=Never -- wget -T 5 -q -O - http://local-ai-local-ai.local-ai.svc.cluster.local:8080/v1/models
+{"object":"list","data":[]}
+```
+
+All three tests succeed because Kubernetes allows all connections by default.
+
+### Secure AI Workload Access
+
+Let's secure the LocalAI service using Network Policies. We'll implement a zero-trust model where the AI service is completely isolated except for legitimate traffic from the Nginx ingress controller.
+
+First, create a default-deny policy to block all incoming traffic to pods in the `local-ai` namespace (the policy name below is illustrative):
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: local-ai
+spec:
+  # An empty podSelector selects every pod in the local-ai namespace.
+  podSelector: {}
+  policyTypes:
+  - Ingress
+EOF
+```
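+Next, allow only the Nginx ingress controller to reach the LocalAI pods. The policy sketched below is a minimal example: it assumes the ingress controller runs in the `nginx` namespace and carries the standard `app.kubernetes.io/name: ingress-nginx` label, and that LocalAI listens on port `8080`; adjust the name, selectors, and port to match your deployment:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  # Illustrative name; pick one that fits your conventions.
+  name: allow-nginx-ingress
+  namespace: local-ai
+spec:
+  # Apply the rule to every pod in the local-ai namespace.
+  podSelector: {}
+  policyTypes:
+  - Ingress
+  ingress:
+  - from:
+    # Only pods in the nginx namespace that carry the ingress controller label are allowed.
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: nginx
+      podSelector:
+        matchLabels:
+          app.kubernetes.io/name: ingress-nginx
+    ports:
+    - protocol: TCP
+      port: 8080
+EOF
+```
+
+With both policies applied, the earlier in-cluster tests are expected to time out, while requests routed through the Nginx ingress controller continue to succeed:
+
+```bash
+# Expected to time out now, since only the ingress controller may reach LocalAI:
+kubectl run test-connectivity -n local-ai --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 \
+  --rm -i --restart=Never -- wget -T 5 -q -O - http://local-ai-local-ai.local-ai.svc.cluster.local:8080/v1/models
+
+# Expected to keep working, since the traffic originates from the Nginx ingress controller:
+curl -H "Host: localai.local" http://$INGRESS_ENDPOINT/v1/models
+```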
+## Further Reading
+
+- [KKP Applications Documentation]({{< ref "../../../tutorials-howtos/applications/" >}})
+
+For CNI-specific advanced features:
+- [Cilium Network Policies](https://docs.cilium.io/en/stable/security/policy/)
+- [Calico Network Policies](https://docs.tigera.io/calico/latest/network-policy/)
+
+## Conclusion
+
+Network Policies transform Kubernetes' default-allow networking into a secure, controlled environment. By implementing zero-trust networking, you ensure that only authorized services can access your workloads.
+
+For AI workloads and other sensitive services, this approach provides strong security and network isolation. Combined with KKP's Application Catalog and the supported CNIs, you can quickly deploy and secure workloads.