combine all config.test*edn files #223

Merged 1 commit on May 11, 2021
7 changes: 5 additions & 2 deletions Makefile
@@ -11,8 +11,9 @@ setup:
sleep 10
docker exec ziggurat_kafka /opt/bitnami/kafka/bin/kafka-topics.sh --create --topic $(topic) --partitions 3 --replication-factor 1 --zookeeper ziggurat_zookeeper
docker exec ziggurat_kafka /opt/bitnami/kafka/bin/kafka-topics.sh --create --topic $(another_test_topic) --partitions 3 --replication-factor 1 --zookeeper ziggurat_zookeeper

test: setup
ZIGGURAT_STREAM_ROUTER_DEFAULT_ORIGIN_TOPIC=$(topic) lein test
TESTING_TYPE=local lein test
docker-compose down

setup-cluster:
@@ -24,10 +25,12 @@ setup-cluster:
# Sleeping for 30s to allow the cluster to come up
docker exec ziggurat_kafka1_1 kafka-topics --create --topic $(topic) --partitions 3 --replication-factor 3 --if-not-exists --zookeeper ziggurat_zookeeper_1
docker exec ziggurat_kafka1_1 kafka-topics --create --topic $(another_test_topic) --partitions 3 --replication-factor 3 --if-not-exists --zookeeper ziggurat_zookeeper_1

test-cluster: setup-cluster
ZIGGURAT_STREAM_ROUTER_DEFAULT_ORIGIN_TOPIC=$(topic) lein test-cluster
TESTING_TYPE=cluster lein test
docker-compose -f docker-compose-cluster.yml down
rm -rf /tmp/ziggurat_kafka_cluster_data

coverage: setup
lein code-coverage
docker-compose down
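The Makefile change replaces per-topic environment plumbing with a single `TESTING_TYPE` variable (`local` for `make test`, `cluster` for `make test-cluster`) that the test run presumably uses to pick the right Kafka setup. A minimal sketch of that dispatch pattern in shell — the function name and echoed labels are hypothetical, only the `TESTING_TYPE` values come from the diff:

```shell
# pick_test_mode maps a TESTING_TYPE value (defaulting to "local")
# to the docker-compose setup the Makefile targets above bring up.
pick_test_mode() {
  case "${1:-local}" in
    local)   echo "single-broker" ;;   # docker-compose.yml, `make test`
    cluster) echo "three-broker" ;;    # docker-compose-cluster.yml, `make test-cluster`
    *)       echo "unknown TESTING_TYPE: $1" >&2; return 1 ;;
  esac
}

pick_test_mode local
pick_test_mode cluster
```

A single switch like this is what lets the two CI scripts shrink: the mode is chosen at invocation time instead of by renaming config files before the run.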
8 changes: 2 additions & 6 deletions README.md
@@ -488,7 +488,7 @@ All Ziggurat configs should be in your `clonfig` `config.edn` under the `:ziggur
:key-serializer "org.apache.kafka.common.serialization.StringSerializer"}}}
:enable-streams-uncaught-exception-handling [true :bool]
:default-api-timeout-ms-config [600000 :int]
:datadog {:host "localhost"
:statsd {:host "localhost"
:port [8125 :int]
:enabled [false :bool]}
:statsd {:host "localhost"
@@ -543,8 +543,7 @@ All Ziggurat configs should be in your `clonfig` `config.edn` under the `:ziggur
- max.in.flight.requests.per.connection - The maximum number of unacknowledged requests the client will send on a single connection before blocking.
- enable.idempotence - When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream.

- datadog - The statsd host and port that metrics should be sent to, although the key name is datadog, it supports statsd as well to send metrics.
- statsd - Same as datadog but with a more appropriate name, the :datadog key will be deprecated in the future.
- statsd - Formerly known as datadog; the statsd host and port that metrics should be sent to.
- sentry - Whenever a :failure keyword is returned from the mapper-function or an exception is raised while executing the mapper-function, an event is sent to sentry. You can skip this flow by disabling it.
- rabbit-mq-connection - The details required to make a connection to rabbitmq. We use rabbitmq for the retry mechanism.
- rabbit-mq - The queues that are part of the retry mechanism
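Concretely, the renamed metrics block in `config.edn` would read as follows (the host, port, and enabled values mirror the defaults shown in the diff above; adjust them for your environment):

```clojure
{:ziggurat {:statsd {:host    "localhost"
                     :port    [8125 :int]
                     :enabled [false :bool]}}}
```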
@@ -596,9 +595,6 @@ and different timeout values.

## Deprecation Notice

- Please note that the :datadog key inside the config file will be removed (sometime in the future) in favor of :statsd. The contents are the same; only the key name changed. The reason for this is to avoid confusion for our users. We will still keep backward compatibility for the :datadog key.

## Contribution

- For dev setup and contributions please refer to CONTRIBUTING.md
1 change: 0 additions & 1 deletion bin/run_cluster_tests_in_ci.sh
@@ -3,5 +3,4 @@
set -ex

lein clean
mv -fv resources/config.test.{cluster.ci.edn,cluster.edn}
sudo make test-cluster
7 changes: 0 additions & 7 deletions bin/run_tests_in_ci.sh

This file was deleted.

4 changes: 1 addition & 3 deletions project.clj
@@ -61,9 +61,7 @@
:plugins [[lein-shell "0.5.0"]]
:pedantic? :warn
:java-source-paths ["src/com"]
:aliases {"code-coverage" ["with-profile" "test" "cloverage" "--output" "coverage" "--lcov"]
"test-cluster" ["shell" "lein" "test"]}
:shell {:env {"TEST_CONFIG_FILE" "config.test.cluster.edn"}}
:aliases {"code-coverage" ["with-profile" "test" "cloverage" "--output" "coverage" "--lcov"]}
:aot [ziggurat.kafka-consumer.invalid-return-type-exception]
:profiles {:uberjar {:aot :all
:global-vars {*warn-on-reflection* true}
87 changes: 0 additions & 87 deletions resources/config.test.ci.edn

This file was deleted.