Support ClickHouse deployment with Persistent Volume #3608

Merged
merged 1 commit on May 6, 2022
45 changes: 24 additions & 21 deletions build/yamls/flow-visibility.yml
@@ -86,26 +86,27 @@ data:
UInt64,\n reverseThroughputFromDestinationNode UInt64,\n trusted
UInt8 DEFAULT 0\n ) engine=MergeTree\n ORDER BY (timeInserted, flowEndSeconds)\n
\ TTL timeInserted + INTERVAL 1 HOUR\n SETTINGS merge_with_ttl_timeout =
3600;\n\n CREATE MATERIALIZED VIEW flows_pod_view\n ENGINE = SummingMergeTree\n
\ ORDER BY (\n timeInserted,\n flowEndSeconds,\n flowEndSecondsFromSourceNode,\n
\ flowEndSecondsFromDestinationNode,\n sourcePodName,\n destinationPodName,\n
\ destinationIP,\n destinationServicePortName,\n flowType,\n
\ sourcePodNamespace,\n destinationPodNamespace)\n TTL timeInserted
+ INTERVAL 1 HOUR\n SETTINGS merge_with_ttl_timeout = 3600\n POPULATE\n
\ AS SELECT\n timeInserted,\n flowEndSeconds,\n flowEndSecondsFromSourceNode,\n
\ flowEndSecondsFromDestinationNode,\n sourcePodName,\n destinationPodName,\n
\ destinationIP,\n destinationServicePortName,\n flowType,\n
\ sourcePodNamespace,\n destinationPodNamespace,\n sum(octetDeltaCount)
AS octetDeltaCount,\n sum(reverseOctetDeltaCount) AS reverseOctetDeltaCount,\n
\ sum(throughput) AS throughput,\n sum(reverseThroughput) AS reverseThroughput,\n
\ sum(throughputFromSourceNode) AS throughputFromSourceNode,\n sum(throughputFromDestinationNode)
AS throughputFromDestinationNode\n FROM flows\n GROUP BY\n timeInserted,\n
3600;\n\n CREATE MATERIALIZED VIEW IF NOT EXISTS flows_pod_view\n ENGINE
= SummingMergeTree\n ORDER BY (\n timeInserted,\n flowEndSeconds,\n
\ flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ sourcePodName,\n destinationPodName,\n destinationIP,\n
\ destinationServicePortName,\n flowType,\n sourcePodNamespace,\n
\ destinationPodNamespace)\n TTL timeInserted + INTERVAL 1 HOUR\n SETTINGS
merge_with_ttl_timeout = 3600\n POPULATE\n AS SELECT\n timeInserted,\n
\ flowEndSeconds,\n flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ sourcePodName,\n destinationPodName,\n destinationIP,\n
\ destinationServicePortName,\n flowType,\n sourcePodNamespace,\n
\ destinationPodNamespace;\n\n CREATE MATERIALIZED VIEW flows_node_view\n
\ ENGINE = SummingMergeTree\n ORDER BY (\n timeInserted,\n flowEndSeconds,\n
\ destinationPodNamespace,\n sum(octetDeltaCount) AS octetDeltaCount,\n
\ sum(reverseOctetDeltaCount) AS reverseOctetDeltaCount,\n sum(throughput)
AS throughput,\n sum(reverseThroughput) AS reverseThroughput,\n sum(throughputFromSourceNode)
AS throughputFromSourceNode,\n sum(throughputFromDestinationNode) AS throughputFromDestinationNode\n
\ FROM flows\n GROUP BY\n timeInserted,\n flowEndSeconds,\n
\ flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ sourcePodName,\n destinationPodName,\n destinationIP,\n
\ destinationServicePortName,\n flowType,\n sourcePodNamespace,\n
\ destinationPodNamespace;\n\n CREATE MATERIALIZED VIEW IF NOT EXISTS
flows_node_view\n ENGINE = SummingMergeTree\n ORDER BY (\n timeInserted,\n
\ flowEndSeconds,\n flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ sourceNodeName,\n destinationNodeName,\n sourcePodNamespace,\n
\ destinationPodNamespace)\n TTL timeInserted + INTERVAL 1 HOUR\n SETTINGS
merge_with_ttl_timeout = 3600\n POPULATE\n AS SELECT\n timeInserted,\n
@@ -120,9 +121,9 @@ data:
AS reverseThroughputFromDestinationNode\n FROM flows\n GROUP BY\n timeInserted,\n
\ flowEndSeconds,\n flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ sourceNodeName,\n destinationNodeName,\n sourcePodNamespace,\n
\ destinationPodNamespace;\n\n CREATE MATERIALIZED VIEW flows_policy_view\n
\ ENGINE = SummingMergeTree\n ORDER BY (\n timeInserted,\n flowEndSeconds,\n
\ flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ destinationPodNamespace;\n\n CREATE MATERIALIZED VIEW IF NOT EXISTS
flows_policy_view\n ENGINE = SummingMergeTree\n ORDER BY (\n timeInserted,\n
\ flowEndSeconds,\n flowEndSecondsFromSourceNode,\n flowEndSecondsFromDestinationNode,\n
\ egressNetworkPolicyName,\n egressNetworkPolicyRuleAction,\n ingressNetworkPolicyName,\n
\ ingressNetworkPolicyRuleAction,\n sourcePodNamespace,\n destinationPodNamespace)\n
\ TTL timeInserted + INTERVAL 1 HOUR\n SETTINGS merge_with_ttl_timeout =
@@ -145,7 +146,7 @@ data:
\ ORDER BY (timeCreated);\n \nEOSQL\n"
kind: ConfigMap
metadata:
name: clickhouse-mounted-configmap-dkbmg82ctg
name: clickhouse-mounted-configmap-58fkkt9b56
namespace: flow-visibility
---
apiVersion: v1
@@ -4934,12 +4935,14 @@ spec:
value: default.flows
- name: MV_NAMES
value: default.flows_pod_view default.flows_node_view default.flows_policy_view
- name: STORAGE_SIZE
value: 8Gi
image: projects.registry.vmware.com/antrea/flow-visibility-clickhouse-monitor:latest
imagePullPolicy: IfNotPresent
name: clickhouse-monitor
volumes:
- configMap:
name: clickhouse-mounted-configmap-dkbmg82ctg
name: clickhouse-mounted-configmap-58fkkt9b56
name: clickhouse-configmap-volume
- emptyDir:
medium: Memory
25 changes: 0 additions & 25 deletions build/yamls/flow-visibility/base/clickhouse.yml
@@ -45,32 +45,7 @@ spec:
volumeMounts:
- name: clickhouse-configmap-volume
mountPath: /docker-entrypoint-initdb.d
- name: clickhouse-storage-volume
mountPath: /var/lib/clickhouse
- name: clickhouse-monitor
image: flow-visibility-clickhouse-monitor
env:
- name: CLICKHOUSE_USERNAME
valueFrom:
secretKeyRef:
name: clickhouse-secret
key: username
- name: CLICKHOUSE_PASSWORD
valueFrom:
secretKeyRef:
name: clickhouse-secret
key: password
- name: DB_URL
value: "tcp://localhost:9000"
- name: TABLE_NAME
value: "default.flows"
- name: MV_NAMES
value: "default.flows_pod_view default.flows_node_view default.flows_policy_view"
volumes:
- name: clickhouse-configmap-volume
configMap:
name: $(CLICKHOUSE_CONFIG_MAP_NAME)
- name: clickhouse-storage-volume
emptyDir:
medium: Memory
sizeLimit: 8Gi
@@ -72,7 +72,7 @@ clickhouse client -n -h 127.0.0.1 <<-EOSQL
TTL timeInserted + INTERVAL 1 HOUR
SETTINGS merge_with_ttl_timeout = 3600;

CREATE MATERIALIZED VIEW flows_pod_view
CREATE MATERIALIZED VIEW IF NOT EXISTS flows_pod_view
ENGINE = SummingMergeTree
ORDER BY (
timeInserted,
@@ -121,7 +121,7 @@ clickhouse client -n -h 127.0.0.1 <<-EOSQL
sourcePodNamespace,
destinationPodNamespace;

CREATE MATERIALIZED VIEW flows_node_view
CREATE MATERIALIZED VIEW IF NOT EXISTS flows_node_view
ENGINE = SummingMergeTree
ORDER BY (
timeInserted,
@@ -163,7 +163,7 @@ clickhouse client -n -h 127.0.0.1 <<-EOSQL
sourcePodNamespace,
destinationPodNamespace;

CREATE MATERIALIZED VIEW flows_policy_view
CREATE MATERIALIZED VIEW IF NOT EXISTS flows_policy_view
ENGINE = SummingMergeTree
ORDER BY (
timeInserted,
24 changes: 24 additions & 0 deletions build/yamls/flow-visibility/patches/chmonitor/chMonitor.yml
@@ -0,0 +1,24 @@
- op: add
Contributor:
Is this change independent from the rest of this PR (PersistentVolume support)? If yes, could it go into a separate PR?

Contributor Author:

It is inspired by the PV support. This change allows us to generate a manifest without the ClickHouse monitor. We expect this may happen when a user has large enough PV storage: in that case, the monitor won't be triggered, as the throughput bottleneck would be on the Flow Aggregator side rather than on the ClickHouse storage side. But overall it makes sense to me to put it in a separate PR.

Contributor Author (@yanjunz97, Apr 8, 2022):

This part has been removed from this PR now. Hope it is clearer now. Thanks!

Updated: the file is added back to allow a customized storage size, but the optional monitor is still not added in this PR.

path: /spec/templates/podTemplates/0/spec/containers/-
value:
name: clickhouse-monitor
image: flow-visibility-clickhouse-monitor
env:
- name: CLICKHOUSE_USERNAME
valueFrom:
secretKeyRef:
name: clickhouse-secret
key: username
- name: CLICKHOUSE_PASSWORD
valueFrom:
secretKeyRef:
name: clickhouse-secret
key: password
- name: DB_URL
value: "tcp://localhost:9000"
- name: TABLE_NAME
value: "default.flows"
- name: MV_NAMES
value: "default.flows_pod_view default.flows_node_view default.flows_policy_view"
- name: STORAGE_SIZE
value: STORAGE_SIZE_VALUE
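
For readers following how these patch files are consumed: chMonitor.yml is a JSON 6902 operation list, so it has to be referenced from a kustomization that targets the ClickHouseInstallation resource. A minimal sketch follows; the target group/version/kind/name and the relative paths are assumptions based on Altinity ClickHouse operator conventions, not taken from this diff.

# Hypothetical kustomization.yaml sketch; target fields and paths are assumed, not part of this PR.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesJson6902:
  - target:
      group: clickhouse.altinity.com
      version: v1
      kind: ClickHouseInstallation
      name: clickhouse
    path: patches/chmonitor/chMonitor.yml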
@@ -1,6 +1,3 @@
- op: add
path: /spec/templates/podTemplates/0/spec/containers/0/imagePullPolicy
value: IfNotPresent
- op: add
path: /spec/templates/podTemplates/0/spec/containers/1/imagePullPolicy
value: IfNotPresent
28 changes: 28 additions & 0 deletions build/yamls/flow-visibility/patches/pv/createLocalPv.yml
@@ -0,0 +1,28 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: clickhouse-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: True
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: clickhouse-pv
spec:
storageClassName: clickhouse-storage
capacity:
storage: STORAGE_SIZE
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
local:
path: LOCAL_PATH
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: antrea.io/clickhouse-data-node
operator: Exists
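
A note on the local PV variant above: because of the nodeAffinity term, the PV only binds on a Node that carries the antrea.io/clickhouse-data-node label key (any value), and the directory substituted for LOCAL_PATH must already exist on that Node. A minimal, hypothetical preparation step (node name and path are placeholders):

# Label the Node that will host the ClickHouse data; any value works since the
# nodeAffinity rule only checks that the key exists.
kubectl label node <node-name> antrea.io/clickhouse-data-node=
# Create the directory that LOCAL_PATH will be substituted with, on that same Node.
ssh <node-name> "sudo mkdir -p /data/clickhouse"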
23 changes: 23 additions & 0 deletions build/yamls/flow-visibility/patches/pv/createNfsPv.yml
@@ -0,0 +1,23 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: clickhouse-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: True
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: clickhouse-pv
spec:
storageClassName: clickhouse-storage
capacity:
storage: STORAGE_SIZE
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
nfs:
path: NFS_SERVER_PATH
server: NFS_SERVER_ADDRESS
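
As with the local variant, STORAGE_SIZE, NFS_SERVER_ADDRESS, and NFS_SERVER_PATH are placeholders meant to be filled in when the manifest is generated. A rough sketch of one way to substitute them before applying the PV (example values only; the actual generation script may do this differently):

# Hypothetical placeholder substitution for the NFS-backed PersistentVolume.
sed -e "s|STORAGE_SIZE|8Gi|g" \
    -e "s|NFS_SERVER_ADDRESS|nfs.example.com|g" \
    -e "s|NFS_SERVER_PATH|/exports/clickhouse|g" \
    build/yamls/flow-visibility/patches/pv/createNfsPv.yml | kubectl apply -f -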
14 changes: 14 additions & 0 deletions build/yamls/flow-visibility/patches/pv/mountPv.yml
@@ -0,0 +1,14 @@
- op: add
path: /spec/defaults/templates/dataVolumeClaimTemplate
value: clickhouse-storage-template
- op: add
path: /spec/templates/volumeClaimTemplates
value:
- name: clickhouse-storage-template
spec:
storageClassName: STORAGECLASS_NAME
accessModes:
- ReadWriteOnce
resources:
requests:
storage: STORAGE_SIZE
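
For clarity, once the two operations above are applied and the placeholders are substituted, the relevant portion of the ClickHouseInstallation spec would look roughly like the following (values are examples; clickhouse-storage is the StorageClass created by the PV patches):

# Illustrative rendered result of mountPv.yml, with example values substituted.
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: clickhouse-storage-template
  templates:
    volumeClaimTemplates:
      - name: clickhouse-storage-template
        spec:
          storageClassName: clickhouse-storage
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 8Gi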
12 changes: 12 additions & 0 deletions build/yamls/flow-visibility/patches/ram/mountRam.yml
@@ -0,0 +1,12 @@
- op: add
path: /spec/templates/podTemplates/0/spec/volumes/-
value:
name: clickhouse-storage-volume
emptyDir:
medium: Memory
sizeLimit: STORAGE_SIZE
- op: add
path: /spec/templates/podTemplates/0/spec/containers/0/volumeMounts/-
value:
name: clickhouse-storage-volume
mountPath: /var/lib/clickhouse