
docker-hadoop-druid-metabase

This repo constructs an infrastructure environment using docker-compose, including Hadoop (Pseudo-Distributed Mode), Druid (Single-Server), and Metabase.

Ports

Server                      Port
HDFS Web UI                 50070
YARN ResourceManager Web UI 8088
MapReduce HistoryServer     19888
Druid Query Server          8888
Metabase                    3000
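Once the stack is running, a quick way to confirm each service is reachable is to hit these ports from the host. The curl targets below simply mirror the table above, assuming the containers publish these ports on localhost:

curl -s http://localhost:50070/   # HDFS NameNode web UI
curl -s http://localhost:8088/    # YARN ResourceManager web UI
curl -s http://localhost:19888/   # MapReduce HistoryServer
curl -s http://localhost:8888/    # Druid query console
curl -s http://localhost:3000/    # Metabase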

Set up

mkdir -p /tmp/shared

Create a folder to share volumes between the Docker containers. If you have input data, copy it into that folder; for example (the file name and path are placeholders):
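cp /path/to/input-data.csv /tmp/shared/   # placeholder path; use your own data file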

Start

docker-compose up --build &
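Because the stack is started in the background (&), it can be useful to follow the container logs with the standard docker-compose CLI until all services are up:

docker-compose logs -f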

The minimum specification is 4 vCPU cores and 16 GiB of RAM.

If you want to run on a lower-spec machine, change micro-quickstart to nano-quickstart in the Druid Dockerfile and bootstrap.sh. However, nano-quickstart has very little heap space and cannot index large files.
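One way to make that substitution is with sed. The file paths below are assumptions about the repo layout; adjust them to wherever the Druid Dockerfile and bootstrap.sh actually live:

sed -i 's/micro-quickstart/nano-quickstart/g' druid/Dockerfile druid/bootstrap.sh   # paths are assumed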

Clean up

docker-compose down
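If you also want to discard the staged data, remove the shared folder as well (optional, and destructive):

rm -rf /tmp/shared   # deletes any data you copied in during set up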
