After going through the xref:installation.adoc[installation section] and installing all the Operators, you will now deploy a Druid cluster and its dependencies. Afterwards you can verify that it works by ingesting example data and querying it.
Three things need to be installed to have a Druid cluster:
- A ZooKeeper instance for internal use by Druid
- An HDFS instance to be used as a backend for deep storage
- The Druid cluster itself
We will create them in this order; each is created by applying a manifest file. The Operators you just installed will then create the resources according to the manifests.
Create a file named `zookeeper.yaml` with the following content:
link:example$code/zookeeper.yaml[role=include]
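For reference, such a manifest looks roughly like the following sketch. The exact fields (in particular the product version) depend on your operator release, so treat the included file above as authoritative; the `ZookeeperZnode` name here is hypothetical and gives Druid its own chroot inside ZooKeeper:

[source,yaml]
----
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  image:
    productVersion: 3.8.0  # assumption: pick a version supported by your operator release
  servers:
    roleGroups:
      default:
        replicas: 3
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-druid-znode  # hypothetical name; Druid connects through this ZNode
spec:
  clusterRef:
    name: simple-zk
----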
Then create the resources by applying the manifest file:
link:example$code/getting-started.sh[role=include]
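The relevant line of the script is simply a `kubectl apply` (a minimal sketch, assuming the file name used above):

[source,shell]
----
kubectl apply -f zookeeper.yaml
----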
Create a file named `hdfs.yaml` with the following contents:
link:example$code/hdfs.yaml[role=include]
And apply it:
link:example$code/getting-started.sh[role=include]
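Again, the essential command is a plain apply:

[source,shell]
----
kubectl apply -f hdfs.yaml
----

Finally, create the Druid cluster itself by applying a `druid.yaml` manifest in the same way. The following is only a rough sketch, assuming the `DruidCluster` custom resource of the Druid Operator; the version number and the embedded Derby metadata store are placeholders suitable for testing only:

[source,yaml]
----
---
apiVersion: druid.stackable.tech/v1alpha1
kind: DruidCluster
metadata:
  name: simple-druid
spec:
  image:
    productVersion: 26.0.0  # assumption: use a version supported by your operator release
  clusterConfig:
    zookeeperConfigMapName: simple-druid-znode  # discovery ConfigMap of the ZNode sketched above
    deepStorage:
      hdfs:
        configMapName: simple-hdfs  # discovery ConfigMap published by the HDFS operator
        directory: /druid
    metadataStorageDatabase:
      dbType: derby  # embedded Derby; not suitable for production
      connString: jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
      host: localhost
      port: 1527
  brokers:
    roleGroups:
      default:
        replicas: 1
  coordinators:
    roleGroups:
      default:
        replicas: 1
  historicals:
    roleGroups:
      default:
        replicas: 1
  middleManagers:
    roleGroups:
      default:
        replicas: 1
  routers:
    roleGroups:
      default:
        replicas: 1
----

Apply it with `kubectl apply -f druid.yaml` and wait for the Pods to become ready.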
Next you will submit an ingestion job and then query the ingested data, either through the web interface or the API.
First, make sure that all the Pods in the StatefulSets are ready:
[source,shell]
----
kubectl get statefulset
----
The output should show all Pods ready:

----
NAME                                 READY   AGE
simple-druid-broker-default          1/1     5m
simple-druid-coordinator-default     1/1     5m
simple-druid-historical-default      1/1     5m
simple-druid-middlemanager-default   1/1     5m
simple-druid-router-default          1/1     5m
simple-hdfs-datanode-default         1/1     6m
simple-hdfs-journalnode-default      1/1     6m
simple-hdfs-namenode-default         2/2     6m
simple-zk-server-default             3/3     7m
----
Then, create a port-forward for the Druid Router:
link:example$code/getting-started.sh[role=include]
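Under the hood this is a single `kubectl port-forward` (a sketch, assuming the router Service is named `simple-druid-router` after the cluster above):

[source,shell]
----
# Forward local port 8888 to the Druid router's web port
kubectl port-forward svc/simple-druid-router 8888
----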
Next, we will ingest some example data using the web interface. If you prefer to use the command line instead, follow the instructions in the collapsed section below.
Alternative: Using the command line
If you prefer not to use the web interface and would rather interact with the API directly, create a file called `ingestion_spec.json` with the following contents:
link:example$code/ingestion_spec.json[role=include]
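The spec follows Druid's native batch ingestion format. As a hedged sketch (the input URI and the dimension list are assumptions modeled on the Druid quickstart's Wikipedia sample, not necessarily the contents of the included file):

[source,json]
----
{
  "type": "index_parallel",
  "spec": {
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "http",
        "uris": ["https://druid.apache.org/data/wikipedia.json.gz"]
      },
      "inputFormat": {"type": "json"}
    },
    "dataSchema": {
      "dataSource": "wikipedia",
      "timestampSpec": {"column": "timestamp", "format": "iso"},
      "dimensionsSpec": {"dimensions": ["channel", "page", "user"]},
      "granularitySpec": {"segmentGranularity": "day", "queryGranularity": "none", "rollup": false}
    },
    "tuningConfig": {
      "type": "index_parallel",
      "partitionsSpec": {"type": "dynamic"}
    }
  }
}
----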
Submit the file with the following `curl` command:
link:example$code/getting-started.sh[role=include]
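The command POSTs the spec to Druid's task API through the forwarded router port, along the lines of:

[source,shell]
----
# Submit the ingestion task to the Overlord via the router proxy
curl -s -X POST -H 'Content-Type: application/json' \
  -d @ingestion_spec.json \
  http://localhost:8888/druid/indexer/v1/task
----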
Continue with the next section.
To open the web interface, navigate your browser to http://localhost:8888/ to find the dashboard:
Now load the example data:
Click through all pages of the load process. You can also follow the https://druid.apache.org/docs/latest/tutorials/index.html[Druid Quickstart Guide].
Once you have finished the ingestion dialog, you should see the ingestion overview with the job, which will eventually show SUCCESS:
To query from the user interface, navigate to the "Query" view in the menu and query the `wikipedia` table:
Alternative: Using the command line
To query from the command line, create a file called `query.json` with the query:
link:example$code/query.json[role=include]
and execute it:
link:example$code/getting-started.sh[role=include]
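As a sketch, the query file holds a single SQL statement and is POSTed to Druid's SQL endpoint through the router (the exact query in the included file may differ):

[source,json]
----
{
  "query": "SELECT page, COUNT(*) AS edits FROM wikipedia GROUP BY page ORDER BY edits DESC LIMIT 10"
}
----

[source,shell]
----
# Run the SQL query against the Druid SQL API
curl -s -X POST -H 'Content-Type: application/json' \
  -d @query.json \
  http://localhost:8888/druid/v2/sql
----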
The result should be similar to:
link:example$code/expected_query_result.json[role=include]
Great! You’ve set up your first Druid cluster, ingested some data and queried it in the web interface!