istio startup
huataihuang committed Jul 27, 2023
1 parent af606a7 commit 9b7b948
Showing 21 changed files with 265 additions and 3 deletions.
@@ -363,5 +363,6 @@ NVIDIA GPU observability is also built on top of :ref:`prometheus`
=====

- `Integrating GPU Telemetry into Kubernetes <https://docs.nvidia.com/datacenter/cloud-native/gpu-telemetry/dcgm-exporter.html#integrating-gpu-telemetry-into-kubernetes>`_
- `Monitoring GPUs in Kubernetes with DCGM <https://developer.nvidia.com/blog/monitoring-gpus-in-kubernetes-with-dcgm/>`_
- `Prometheus + Grafana 监控 NVIDIA GPU <https://www.yaoge123.com/blog/archives/2709>`_ : yaoge123 shared a Grafana panel, `GPU Nodes v2 <https://grafana.com/grafana/dashboards/11752-hpc-gpu-nodes-v2/>`_ , on the `Grafana dashboards <https://grafana.com/dashboards>`_ site. It is based on :ref:`dcgm-exporter` metrics and surfaces more information than NVIDIA's official `NVIDIA DCGM Exporter Dashboard <https://grafana.com/grafana/dashboards/12239-nvidia-dcgm-exporter-dashboard/>`_ panel. However, I could not get it to display data in the setup described in this article; it needs further investigation.
86 changes: 86 additions & 0 deletions source/kubernetes/istio/istio_startup.rst
@@ -120,6 +120,92 @@ Istio provides multiple `Istio configuration profile <https://istio.io/latest/docs/

For a self-hosted ``Baremetal`` (bare-metal) server deployment, you need to deploy :ref:`metallb` to provide the external load-balancing functionality that cloud vendors normally supply: :ref:`metallb_with_istio`

Install and configure :ref:`metallb`
---------------------------------------

- Complete the ``MetalLB`` installation following the steps in :ref:`install_metallb` (not repeated here)

- Create a ``MetalLB`` IP address pool to serve external traffic:

.. literalinclude:: ../network/metallb/metallb_with_istio/y-k8s-ip-pool.yaml
:language: yaml
:caption: Create ``y-k8s-ip-pool.yaml`` to define the IP address pool for external load-balancing services

- Then apply it:

.. literalinclude:: ../network/metallb/metallb_with_istio/kubectl_create_metallb_ip-pool
:caption: Create the MetalLB address pool named ``y-k8s-ip-pool``

- Once the ``MetalLB`` load-balancer IP address pool is in place, check the Ingress service again:

.. literalinclude:: istio_startup/kubectl_get_svc
:caption: Check the ``istio-ingressgateway`` service (svc) again

**Bingo**, you can now see that an external service IP has been assigned:

.. literalinclude:: ../network/metallb/metallb_with_istio/kubectl_get_svc_metallb_ip_output
:caption: After the ``MetalLB`` address pool is configured, the ``istio-ingressgateway`` service (svc) correctly obtains an external load-balancer IP

Obtain the access URL
------------------------

- Run the following commands to obtain the access URL:

.. literalinclude:: istio_startup/get_ingress_ip_port_url
:caption: Get the Ingress IP and port and assemble the access URL

Following the practice above, the URL output I obtained is:

.. literalinclude:: istio_startup/get_ingress_ip_port_url_output
:caption: Get the Ingress IP and port and assemble the access URL
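
To verify that the gateway actually serves the Bookinfo application, the Istio getting-started guide checks the product page title; a quick check along those lines (assuming the Bookinfo sample is deployed and ``GATEWAY_URL`` is set as above)::

   # Expect: <title>Simple Bookstore App</title>
   curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"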

.. note::

The access URL above comes from the internal network IP address pool where I deployed ``zcloud``, so it cannot be accessed directly from the external ``public`` network segment. There are two simple approaches:

- Method one: port forwarding via ``iptables``:

.. literalinclude:: istio_startup/iptables_port_forwarding_bookinfo
:language: bash
:caption: Use ``iptables`` port forwarding to reach the :ref:`metallb` ingress exposed by the internal Kubernetes cluster

- Method two: build an :ref:`nginx_reverse_proxy`
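
For either method, the DNAT rules only work if the forwarding host has kernel IP forwarding enabled; a quick check-and-enable sketch::

   # verify and, if needed, enable IPv4 forwarding (persist in /etc/sysctl.conf for reboots)
   sysctl net.ipv4.ip_forward
   sudo sysctl -w net.ipv4.ip_forward=1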

Dashboard
================

Istio provides a Dashboard for observability.

- By default ``kiali`` uses ``ClusterIP``, so it cannot be accessed externally; the official approach uses ``istioctl dashboard kiali`` to open port ``20001`` on ``127.0.0.1``. Since I deployed on a server, I switched the service to ``LoadBalancer`` mode instead, which combined with :ref:`metallb` makes access very easy.

- Edit the service with ``kubectl -n istio-system edit svc kiali``, changing::

type: ClusterIP

to::

type: LoadBalancer

The ``svc`` then changes from::

kiali ClusterIP 10.233.63.114 <none> 20001/TCP,9090/TCP 15m

automatically to::

kiali LoadBalancer 10.233.63.114 192.168.8.152 20001:32561/TCP,9090:32408/TCP 20m
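
As a non-interactive alternative to ``kubectl edit``, the same change can be applied with a one-line patch (a sketch equivalent to the edit above)::

   kubectl -n istio-system patch svc kiali -p '{"spec": {"type": "LoadBalancer"}}'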

- Create another simple port-forwarding script, ``iptables_port_forwarding_kiali``:

.. literalinclude:: istio_startup/iptables_port_forwarding_kiali
:language: bash
:caption: Use ``iptables`` port forwarding to access ``kiali``

The ``kiali`` service can then be reached at http://10.1.1.111:20001/kiali

You can now see the ``kiali`` Dashboard:

.. figure:: ../../_static/kubernetes/istio/kiali_dashboard.png


References
==========
11 changes: 11 additions & 0 deletions source/kubernetes/istio/istio_startup/get_ingress_ip_port_url
@@ -0,0 +1,11 @@
# For other cloud environments, see https://istio.io/latest/docs/setup/getting-started/#determining-the-ingress-ip-and-ports

# Ingress host:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Ingress port:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')

# Assemble the actual GATEWAY access URL
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "$GATEWAY_URL"
1 change: 1 addition & 0 deletions source/kubernetes/istio/istio_startup/get_ingress_ip_port_url_output
@@ -0,0 +1 @@
192.168.8.151:80
9 changes: 9 additions & 0 deletions source/kubernetes/istio/istio_startup/iptables_port_forwarding_bookinfo
@@ -0,0 +1,9 @@
# Assume the public interface IP address is 10.10.1.111 (simulating a service exposed to the internet)
local_host=10.10.1.111
bookinfo_port=80

istio_bookinfo_host=192.168.8.151
istio_bookinfo_port=80

# DNAT inbound traffic arriving on the public IP to the Istio ingress; SNAT the return path
sudo iptables -t nat -A PREROUTING -p tcp -d ${local_host} --dport ${bookinfo_port} -j DNAT --to-destination ${istio_bookinfo_host}:${istio_bookinfo_port}
sudo iptables -t nat -A POSTROUTING -p tcp -d ${istio_bookinfo_host} --dport ${istio_bookinfo_port} -j SNAT --to-source ${local_host}
9 changes: 9 additions & 0 deletions source/kubernetes/istio/istio_startup/iptables_port_forwarding_kiali
@@ -0,0 +1,9 @@
local_host=10.1.1.111

kiali_port=20001

istio_kiali_host=192.168.8.152
istio_kiali_port=20001

# DNAT inbound traffic arriving on the public IP to the kiali service; SNAT the return path
sudo iptables -t nat -A PREROUTING -p tcp -d ${local_host} --dport ${kiali_port} -j DNAT --to-destination ${istio_kiali_host}:${istio_kiali_port}
sudo iptables -t nat -A POSTROUTING -p tcp -d ${istio_kiali_host} --dport ${istio_kiali_port} -j SNAT --to-source ${local_host}
3 changes: 2 additions & 1 deletion source/kubernetes/network/metallb/install_metallb.rst
@@ -31,7 +31,7 @@ MetalLB implements an FRR mode that uses an FRR container to handle BGP sessions
Preparation
===============

If you use ``kube-proxy`` in ``IPVS`` mode, starting with Kubernetes v1.14.2 you must enable strict ARP mode. Note that if you use ``kube-router`` as the ``service-proxy`` 饿 this step is not needed, since it enables ``strict ARP`` by default
If you use ``kube-proxy`` in ``IPVS`` mode, starting with Kubernetes v1.14.2 you must enable strict ARP mode. Note that if you use ``kube-router`` as the ``service-proxy`` then this step is not needed, since it enables ``strict ARP`` by default

- Do this by editing the current cluster's ``kube-proxy`` configuration:

@@ -44,6 +44,7 @@ MetalLB implements an FRR mode that uses an FRR container to handle BGP sessions
.. literalinclude:: install_metallb/edit_configmap_enable_strict_arp_mode
:language: bash
:caption: Set ``strictARP: true`` in ``ipvs`` mode
:emphasize-lines: 5
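
The MetalLB documentation also offers a non-interactive way to apply the same change, roughly::

   kubectl get configmap kube-proxy -n kube-system -o yaml | \
   sed -e "s/strictARP: false/strictARP: true/" | \
   kubectl apply -f - -n kube-system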

.. note::

2 changes: 1 addition & 1 deletion source/kubernetes/network/metallb/install_metallb/install_metallb_by_manifest
@@ -1 +1 @@
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
2 changes: 2 additions & 0 deletions source/kubernetes/network/metallb/intro_metallb.rst
@@ -13,6 +13,8 @@ Kubernetes does not provide a network load-balancer implementation for bare-metal clusters (LoadBalancer-type

If you are not running on a supported IaaS platform (GCP, AWS, Azure, ...), LoadBalancers will remain in the ``pending`` state indefinitely after creation.

Bare-metal cluster operators are usually left with only the very rudimentary ``NodePort`` and ``externalIPs`` services, neither of which is adequate for production. :ref:`metallb` provides a network load-balancer implementation that integrates with standard network equipment, so bare-metal clusters can also be used in production environments.

References
==========

78 changes: 78 additions & 0 deletions source/kubernetes/network/metallb/metallb_with_istio.rst
@@ -4,6 +4,84 @@
Deploying MetalLB on :ref:`istio`
===================================

As seen in :ref:`istio_startup`, on a self-built ``baremetal`` server cluster the service never automatically obtains an ``EXTERNAL-IP``. The reason is simple: Kubernetes assumes deployment on a cloud provider's platform, where it calls the vendor's LoadBalancer to expose services externally. For applications deployed on bare-metal servers, you must provide an external load balancer yourself, such as :ref:`metallb`.

Installation
============

- In :ref:`kubespray_startup` I used the default installation, which still uses ``kube-proxy``. According to the official instructions in :ref:`install_metallb`, ``strict ARP`` must be enabled, so edit the current cluster's ``kube-proxy`` configuration:

.. literalinclude:: install_metallb/enable_strict_arp_mode
:language: bash
:caption: Enable ``strict ARP`` by editing the ``kube-proxy`` configuration

Set:

.. literalinclude:: install_metallb/edit_configmap_enable_strict_arp_mode
:language: bash
:caption: Set ``strictARP: true`` in ``ipvs`` mode
:emphasize-lines: 5

- Apply the following ``manifest`` to complete the MetalLB installation:

.. literalinclude:: install_metallb/install_metallb_by_manifest
:language: bash
:caption: Install MetalLB using the manifest

This deploys MetalLB into the cluster's ``metallb-system`` namespace. The components in the manifest include:

- the ``metallb-system/controller`` deployment: the cluster-wide controller that handles IP address assignments
- the ``metallb-system/speaker`` daemonset: the component that speaks the protocol(s) of your choice to make the services reachable
- service accounts for ``controller`` and ``speaker``, along with the RBAC permissions the components need to function
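
Before moving on, it can help to wait until the MetalLB pods report Ready; one possible check::

   kubectl -n metallb-system wait pod --all --for=condition=Ready --timeout=120s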

- Check the deployed pods:

.. literalinclude:: metallb_with_istio/kubectl_get_metallb-system_pods
:caption: Check the component pods in ``metallb-system``

You can see the following pods distributed across the nodes:

.. literalinclude:: metallb_with_istio/kubectl_get_metallb-system_pods_output
:caption: Check how the component pods in ``metallb-system`` are distributed

Configuration
=============

:ref:`istio_startup` has already deployed :ref:`istio_bookinof_demo`

Before the ``MetalLB`` external IP address pool has been deployed, check the Ingress service:

.. literalinclude:: ../../istio/istio_startup/kubectl_get_svc
:caption: Check the ``istio-ingressgateway`` service (svc) created in :ref:`istio_startup`

At this point the ``EXTERNAL-IP`` of ``istio-ingressgateway`` is in the ``pending`` state:

.. literalinclude:: ../../istio/istio_startup/kubectl_get_svc_output
:caption: Check the ``istio-ingressgateway`` service (svc) created above: the output shows that no ``EXTERNAL-IP`` has been assigned

- Create a ``MetalLB`` IP address pool to serve external traffic:

.. literalinclude:: metallb_with_istio/y-k8s-ip-pool.yaml
:language: yaml
:caption: Create ``y-k8s-ip-pool.yaml`` to define the IP address pool for external load-balancing services

- Then apply it:

.. literalinclude:: metallb_with_istio/kubectl_create_metallb_ip-pool
:caption: Create the MetalLB address pool named ``y-k8s-ip-pool``

- Once the ``MetalLB`` load-balancer IP address pool is in place, check the Ingress service again:

.. literalinclude:: ../../istio/istio_startup/kubectl_get_svc
:caption: Check the ``istio-ingressgateway`` service (svc) again

**Bingo**, you can now see that an external service IP has been assigned:

.. literalinclude:: metallb_with_istio/kubectl_get_svc_metallb_ip_output
:caption: After the ``MetalLB`` address pool is configured, the ``istio-ingressgateway`` service (svc) correctly obtains an external load-balancer IP

- Note that my lab environment here uses an internal IP address, ``192.168.8.151``; I also added an :ref:`nginx_reverse_proxy` to map access on the ``public`` network interface to this internal IP
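
One caveat: with the MetalLB version installed above (v0.13+), addresses in Layer 2 mode are only announced once an ``L2Advertisement`` resource references the pool. If the ``EXTERNAL-IP`` is assigned but unreachable, a minimal sketch that binds the pool created above (the resource name is illustrative)::

   kubectl apply -f - <<EOF
   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: y-k8s-l2-advertisement
     namespace: metallb-system
   spec:
     ipAddressPools:
     - y-k8s-ip-pool
   EOF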

References
==========

1 change: 1 addition & 0 deletions source/kubernetes/network/metallb/metallb_with_istio/kubectl_create_metallb_ip-pool
@@ -0,0 +1 @@
kubectl create -f y-k8s-ip-pool.yaml
1 change: 1 addition & 0 deletions source/kubernetes/network/metallb/metallb_with_istio/kubectl_get_metallb-system_pods
@@ -0,0 +1 @@
kubectl -n metallb-system get pods -o wide
7 changes: 7 additions & 0 deletions source/kubernetes/network/metallb/metallb_with_istio/kubectl_get_metallb-system_pods_output
@@ -0,0 +1,7 @@
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
controller-5fd797fbf7-6x8lm 1/1 Running 0 2m56s 10.233.89.145 y-k8s-n-1 <none> <none>
speaker-9lxd5 1/1 Running 0 2m56s 192.168.8.119 y-k8s-n-1 <none> <none>
speaker-fmlzw 1/1 Running 0 2m56s 192.168.8.116 y-k8s-m-1 <none> <none>
speaker-g6f9d 1/1 Running 0 2m56s 192.168.8.118 y-k8s-m-3 <none> <none>
speaker-mpx6m 1/1 Running 0 2m56s 192.168.8.117 y-k8s-m-2 <none> <none>
speaker-x8k7x 1/1 Running 0 2m56s 192.168.8.120 y-k8s-n-2 <none> <none>
2 changes: 2 additions & 0 deletions source/kubernetes/network/metallb/metallb_with_istio/kubectl_get_svc_metallb_ip_output
@@ -0,0 +1,2 @@
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.8.166 192.168.8.151 15021:31210/TCP,80:31659/TCP,443:30721/TCP,31400:32337/TCP,15443:30050/TCP 24h
8 changes: 8 additions & 0 deletions source/kubernetes/network/metallb/metallb_with_istio/y-k8s-ip-pool.yaml
@@ -0,0 +1,8 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: y-k8s-ip-pool
namespace: metallb-system
spec:
addresses:
- 192.168.8.151-192.168.8.198
1 change: 1 addition & 0 deletions source/web/nginx/index.rst
@@ -19,6 +19,7 @@ The official nginx website offers Derek DeJonghe's O'REILLY-published `Complete NGIN
nginx_wpad.rst
change_nginx_user.rst
nginx_reverse_proxy.rst
nginx_reverse_proxy_troubleshooting.rst
nginx_reverse_proxy_nodejs.rst
nginx_reverse_proxy_https.rst
nginx_redirect_url.rst
19 changes: 18 additions & 1 deletion source/web/nginx/nginx_reverse_proxy.rst
@@ -4,7 +4,24 @@
Nginx Reverse Proxy
======================

In :ref:`nginx_reverse_proxy_nodejs` I implemented a simple NGINX reverse proxy. In practice, NGINX reverse-proxy configuration is very useful in many situations, for example in :ref:`grafana_behind_reverse_proxy` to quickly expose :ref:`helm3_prometheus_grafana` through a ``NodePort``
In :ref:`nginx_reverse_proxy_nodejs` I implemented a simple NGINX reverse proxy. In practice, NGINX reverse-proxy configuration is very useful in many situations, for example:

- :ref:`grafana_behind_reverse_proxy` quickly exposes :ref:`helm3_prometheus_grafana` through a ``NodePort``
- building a reverse proxy for :ref:`metallb_with_istio`, so that the ``public`` network interface can forward traffic to services exposed through :ref:`metallb` on the internal :ref:`kubernetes` cluster (**this article**)

A simple reverse proxy
========================

- In the ``/etc/nginx/sites-available/`` directory, create a domain-based ``vhost`` configuration named ``book-info``:

.. literalinclude:: nginx_reverse_proxy/book-info
:caption: A domain-based ``vhost`` reverse proxy to the backend web service exposed by :ref:`metallb_with_istio`

- Create a symlink in ``/etc/nginx/sites-enabled/`` to activate the configuration:

.. literalinclude:: nginx_reverse_proxy/sites-enabled
:language: bash
:caption: Create a symlink in ``/etc/nginx/sites-enabled/`` to activate the configuration
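
After enabling the site, validate the configuration and reload nginx, then confirm the vhost proxies through to the Bookinfo product page. A sketch, assuming a systemd-managed nginx and that the Bookinfo gateway accepts any host, as in the Istio getting-started sample::

   sudo nginx -t && sudo systemctl reload nginx
   # test the name-based vhost without DNS by supplying the Host header
   curl -s -H "Host: book-info.cloud-atlas.io" http://127.0.0.1/productpage | grep -o "<title>.*</title>"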

References
==========
15 changes: 15 additions & 0 deletions source/web/nginx/nginx_reverse_proxy/book-info
@@ -0,0 +1,15 @@
upstream book-info {
server 192.168.8.151:80;
}

server {
listen 80;
#listen [::]:80;

server_name book-info book-info.cloud-atlas.io;

location / {
proxy_set_header Host $http_host;
proxy_pass http://book-info;
}
}
2 changes: 2 additions & 0 deletions source/web/nginx/nginx_reverse_proxy/sites-enabled
@@ -0,0 +1,2 @@
site=book-info
ln -s /etc/nginx/sites-available/${site} /etc/nginx/sites-enabled/${site}
10 changes: 10 additions & 0 deletions source/web/nginx/nginx_reverse_proxy_troubleshooting.rst
@@ -0,0 +1,10 @@
.. _nginx_reverse_proxy_troubleshooting:

===================================
NGINX Reverse Proxy Troubleshooting
===================================
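
A few first-pass checks when a reverse proxy misbehaves (a sketch; log paths assume a Debian/Ubuntu-style nginx layout, and ``192.168.8.151`` is the upstream used in :ref:`nginx_reverse_proxy`)::

   # dump the full effective configuration and test it
   sudo nginx -T
   # watch proxy errors (e.g. upstream connection refused / timed out)
   sudo tail -f /var/log/nginx/error.log
   # verify the upstream is reachable from the proxy host
   curl -sI http://192.168.8.151:80/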

References
==========

- `NGINX Reverse Proxy Configuration and Troubleshooting <https://djangocas.dev/blog/nginx-reverse-proxy-configuration-and-troubleshooting/>`_
