Merge branch 'master' into fix-block-perf-context-description
mergify[bot] committed Jan 15, 2022
2 parents f877ae9 + 1e8e215 commit db0c48d
Showing 7 changed files with 28 additions and 16 deletions.
4 changes: 1 addition & 3 deletions docker/docs/builder/Dockerfile
@@ -1,5 +1,5 @@
# rebuild in #33610
# docker build -t clickhouse/docs-build .
# docker build -t clickhouse/docs-builder .
FROM ubuntu:20.04

# ARG for quick switch to a given ubuntu mirror
@@ -10,8 +10,6 @@ ENV LANG=C.UTF-8

RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install --yes --no-install-recommends \
python3-setuptools \
virtualenv \
wget \
bash \
python \
1 change: 1 addition & 0 deletions docker/docs/check/run.sh
@@ -2,6 +2,7 @@
set -euo pipefail

cd $REPO_PATH/docs/tools
rm -rf venv
mkdir venv
virtualenv -p $(which python3) venv
source venv/bin/activate
1 change: 1 addition & 0 deletions tests/ci/docs_check.py
@@ -57,6 +57,7 @@
cmd = f"docker run --cap-add=SYS_PTRACE --volume={repo_path}:/repo_path --volume={test_output}:/output_path {docker_image}"

run_log_path = os.path.join(test_output, 'runlog.log')
logging.info("Running command: '%s'", cmd)

with TeePopen(cmd, run_log_path) as process:
retcode = process.wait()
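`TeePopen` is ClickHouse's CI helper that mirrors a subprocess's output into a log file while the check runs. A minimal sketch of the same pattern — the `run_and_tee` helper below is hypothetical, not the actual `tests/ci` implementation:

```python
import subprocess
import tempfile

def run_and_tee(cmd, log_path):
    """Run `cmd`, streaming its combined stdout/stderr to `log_path`
    while also collecting it for the caller (tee-style)."""
    captured = []
    with open(log_path, "w") as log:
        with subprocess.Popen(
            cmd, shell=True, text=True,
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        ) as process:
            for line in process.stdout:
                log.write(line)
                captured.append(line)
            retcode = process.wait()
    return retcode, captured

# Stand-in for runlog.log from the hunk above.
run_log_path = tempfile.NamedTemporaryFile(suffix=".log", delete=False).name
retcode, output = run_and_tee("echo docs check", run_log_path)
```

As the added `logging.info` line suggests, recording the command before running it makes the resulting run log self-describing.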
3 changes: 1 addition & 2 deletions tests/integration/test_cleanup_dir_after_bad_zk_conn/test.py
@@ -41,8 +41,7 @@ def test_cleanup_dir_after_bad_zk_conn(start_cluster):
pm.drop_instance_zk_connections(node1)
time.sleep(3)
error = node1.query_and_get_error(query_create)
assert "Poco::Exception. Code: 1000" and \
"All connection tries failed while connecting to ZooKeeper" in error
time.sleep(3)
error = node1.query_and_get_error(query_create)
assert "Directory for table data data/replica/test/ already exists" not in error
node1.query_with_retry(query_create)
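One detail worth noting in the assertion above: in `assert "A" and "B" in error`, the first string literal is merely truthy, so only the second substring is actually tested against `error`. A sketch of the stricter form, using a stand-in error string rather than a real ZooKeeper failure:

```python
# Stand-in for the text returned by query_and_get_error().
error = ("Poco::Exception. Code: 1000. "
         "All connection tries failed while connecting to ZooKeeper")

# Check every expected substring explicitly instead of chaining `and`,
# so each fragment gets its own membership test against `error`.
expected = [
    "Poco::Exception. Code: 1000",
    "All connection tries failed while connecting to ZooKeeper",
]
assert all(fragment in error for fragment in expected)
```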
24 changes: 19 additions & 5 deletions tests/integration/test_delayed_replica_failover/test.py
@@ -79,7 +79,15 @@ def test(started_cluster):
pm.drop_instance_zk_connections(node_1_2)
pm.drop_instance_zk_connections(node_2_2)

time.sleep(4) # allow pings to zookeeper to timeout (must be greater than ZK session timeout).
# allow pings to zookeeper to timeout (must be greater than ZK session timeout).
for _ in range(30):
try:
node_2_2.query("SELECT * FROM system.zookeeper where path = '/'")
time.sleep(0.5)
except:
break
else:
raise Exception("Connection with zookeeper was not lost")

# At this point all replicas are stale, but the query must still go to second replicas which are the least stale ones.
assert instance_with_dist_table.query('''
@@ -96,14 +104,20 @@
max_replica_delay_for_distributed_queries=1
''').strip() == '3'

# If we forbid stale replicas, the query must fail.
with pytest.raises(Exception):
print(instance_with_dist_table.query('''
# If we forbid stale replicas, the query must fail. But sometimes we must have bigger timeouts.
for _ in range(20):
try:
instance_with_dist_table.query('''
SELECT count() FROM distributed SETTINGS
load_balancing='in_order',
max_replica_delay_for_distributed_queries=1,
fallback_to_stale_replicas_for_distributed_queries=0
'''))
''')
time.sleep(0.5)
except:
break
else:
raise Exception("Didn't raise when stale replicas are not allowed")

# Now partition off the remote replica of the local shard and test that failover still works.
pm.partition_instances(node_1_1, node_1_2, port=9000)
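Both new loops in this file rely on Python's `for`/`else`: the `else` branch runs only if the loop never hit `break`, i.e. the expected exception never arrived within the retry budget. A generic sketch of the idiom — the helper name and probe below are illustrative, not from the commit:

```python
import time

def wait_until_raises(probe, attempts=30, delay=0.5):
    """Poll `probe` until it raises; fail if it keeps succeeding.

    The `else` clause of a `for` loop executes only when the loop
    completes without `break`, which is exactly the timeout case here.
    """
    for _ in range(attempts):
        try:
            probe()
            time.sleep(delay)
        except Exception:
            break  # the expected failure arrived
    else:
        raise TimeoutError("probe never failed within the retry budget")

calls = {"count": 0}

def flaky_probe():
    # Succeeds twice, then starts failing -- like a ZooKeeper session
    # that takes a moment to expire after the network partition.
    calls["count"] += 1
    if calls["count"] >= 3:
        raise ConnectionError("connection lost")

wait_until_raises(flaky_probe, attempts=5, delay=0)
```

Note that the bare `except:` used in the test also swallows `KeyboardInterrupt`; catching `Exception`, as above, is the safer default outside quick integration tests.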
@@ -1,6 +1,6 @@
---
title: 'Admixer Aggregates Over 1 Billion Unique Users a Day using ClickHouse'
image: 'https://blog-images.clickhouse.com/en/2022/admixer-case-study/featured.jpg'
image: 'https://blog-images.clickhouse.com/en/2022/a-mixer-case-study/featured.jpg'
date: '2022-01-11'
author: 'Vladimir Zakrevsky'
tags: ['company']
@@ -44,7 +44,7 @@ Thus we needed to:
* Be able to scale the data warehouse as the number of requests grew;
* Have full control over our costs.

![Profile Report](https://blog-images.clickhouse.com/en/2022/admixer-case-study/profile-report.png)
![Profile Report](https://blog-images.clickhouse.com/en/2022/a-mixer-case-study/profile-report.png)

This image shows the Profile Report. Any Ad Campaign in Admixer is split by Line Items (Profiles). It is possible to view detailed reports for each Profile, including Date-Time Statistics, Geo, Domains, and SSPs. This report is also updated in real time.

@@ -69,11 +69,11 @@ ClickHouse helps to cope with the challenges above and provides the following be

Our architecture evolved from 2016 to 2020. The two diagrams below show the state we started in and the state we arrived at.

![Architecture 2016](https://blog-images.clickhouse.com/en/2022/admixer-case-study/architecture-2016.png)
![Architecture 2016](https://blog-images.clickhouse.com/en/2022/a-mixer-case-study/architecture-2016.png)

_Architecture 2016_

![Architecture 2020](https://blog-images.clickhouse.com/en/2022/admixer-case-study/architecture-2020.png)
![Architecture 2020](https://blog-images.clickhouse.com/en/2022/a-mixer-case-study/architecture-2020.png)

_Architecture 2020_

@@ -131,5 +131,3 @@ Today, the company has over 100 supply and demand partners, 3,000+ customers, an

For more information please visit:
[https://admixer.com/](https://admixer.com/)


1 change: 1 addition & 0 deletions website/blog/en/redirects.txt
@@ -30,3 +30,4 @@ clickhouse-at-percona-live-2019.md 2019/clickhouse-at-percona-live-2019.md
clickhouse-meetup-in-madrid-on-april-2-2019.md 2019/clickhouse-meetup-in-madrid-on-april-2-2019.md
clickhouse-meetup-in-beijing-on-june-8-2019.md 2019/clickhouse-meetup-in-beijing-on-june-8-2019.md
five-methods-for-database-obfuscation.md 2020/five-methods-for-database-obfuscation.md
2022/admixer-aggregates-over-1-billion-unique-users-a-day-using-clickhouse.md 2022/a-mixer-aggregates-over-1-billion-unique-users-a-day-using-clickhouse.md
