Real-time Exploratory Analytics on Large Datasets

| Path | Latest commit | Date |
| --- | --- | --- |
| api | Add timestampSpec to metadata.drd and SegmentMetadataQuery (#3227) | Jul 25, 2016 |
| aws-common | Update master version to 0.9.2-SNAPSHOT. (#3133) | Jun 13, 2016 |
| benchmarks | Support filtering on long columns (including __time) (#3180) | Jul 20, 2016 |
| common | Reference counting, better error handling for resources in groupBy v2. ( | Jul 27, 2016 |
| distribution | Distribution: pull-deps compiled hadoop version (#3044) | Jul 18, 2016 |
| docs | More logging around how the coordinator balancer is happening (#3279) | Jul 27, 2016 |
| examples | Quickstart: Use hadoopyString for batch indexing instead of string. (#… | Jul 19, 2016 |
| extensions-contrib | Hadoop InputRowParser for Orc file (#3019) | Jul 26, 2016 |
| extensions-core | Be more respectful of maxRowsInMemory. (#3284) | Jul 26, 2016 |
| indexing-hadoop | Add timestampSpec to metadata.drd and SegmentMetadataQuery (#3227) | Jul 25, 2016 |
| indexing-service | change expected response from ACCEPTED to OK (#3280) | Jul 23, 2016 |
| integration-tests | fix segmentMetadata query results in integration tests (#3288) | Jul 26, 2016 |
| processing | Reference counting, better error handling for resources in groupBy v2. ( | Jul 26, 2016 |
| publications | Support min/max values for metadata query (#2208) | Feb 12, 2016 |
| server | More logging around how the coordinator balancer is happening (#3279) | Jul 27, 2016 |
| services | add comment for default hadoop coordinates (#3257) | Jul 18, 2016 |
| .gitignore | move distribution artifacts to distribution/target | Oct 30, 2015 |
| .travis.yml | Disable cobertura travis portion (#3122) | Jun 13, 2016 |
| CONTRIBUTING.md | Add doc link to eclipse formatting settings as well (#3131) | Jun 24, 2016 |
| DruidCorporateCLA.pdf | fix CLA email / mailing address | Apr 17, 2014 |
| DruidIndividualCLA.pdf | fix CLA email / mailing address | Apr 17, 2014 |
| LICENSE | Clean up README and license | Feb 18, 2015 |
| NOTICE | Two-stage filtering (#3018) | Jun 22, 2016 |
| README.md | update readme (#2830) | Apr 13, 2016 |
| druid_intellij_formatting.xml | Make formatting IntelliJ 2016 friendly (#2978) | May 18, 2016 |
| eclipse.importorder | Merge pull request #2905 from javasoze/eclipse_formatting | Apr 29, 2016 |
| eclipse_formatting.xml | Merge pull request #2905 from javasoze/eclipse_formatting | Apr 30, 2016 |
| pom.xml | Hadoop InputRowParser for Orc file (#3019) | Jul 26, 2016 |
| upload.sh | upload.sh: Use awscli if s3cmd is not available. (#3114) | Jun 8, 2016 |

README.md


Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments.

Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful computations.
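
To make the query side concrete, the sketch below posts a native JSON timeseries query to a Druid broker from plain Java. It assumes a broker listening at localhost:8082 and a hypothetical "wikipedia" datasource with `channel` and `added` columns; adjust these for a real deployment.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

/**
 * Posts a native Druid timeseries query to a broker and prints the JSON response.
 * The broker address, datasource, and column names below are illustrative only.
 */
public class DruidTimeseriesExample
{
  public static void main(String[] args) throws Exception
  {
    // Assumed broker location; 8082 is the usual default broker port.
    URL broker = new URL("http://localhost:8082/druid/v2/");

    // Hourly edit counts and total characters added for one day,
    // filtered to a single channel (hypothetical "wikipedia" datasource).
    String query = "{\n"
        + "  \"queryType\": \"timeseries\",\n"
        + "  \"dataSource\": \"wikipedia\",\n"
        + "  \"granularity\": \"hour\",\n"
        + "  \"intervals\": [\"2016-06-27/2016-06-28\"],\n"
        + "  \"filter\": {\"type\": \"selector\", \"dimension\": \"channel\", \"value\": \"#en.wikipedia\"},\n"
        + "  \"aggregations\": [\n"
        + "    {\"type\": \"count\", \"name\": \"edits\"},\n"
        + "    {\"type\": \"longSum\", \"name\": \"added\", \"fieldName\": \"added\"}\n"
        + "  ]\n"
        + "}";

    HttpURLConnection conn = (HttpURLConnection) broker.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(query.getBytes(StandardCharsets.UTF_8));
    }

    // The broker answers with a JSON array of timestamped result rows.
    try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
      System.out.println(scanner.hasNext() ? scanner.next() : "");
    }
  }
}
```

Per-interval aggregate results like these are what typically back the exploratory dashboards mentioned above.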

Druid can load both streaming and batch data and integrates with Samza, Kafka, Storm, Spark, and Hadoop.

License

Apache License, Version 2.0

More Information

More information about Druid can be found at http://www.druid.io.

Documentation

You can find the documentation for the latest Druid release on the project website.

If you would like to contribute documentation, please do so under /docs/content in this repository and submit a pull request.

Getting Started

You can get started with Druid by following our quickstart.

Reporting Issues

If you find any bugs, please file a GitHub issue.

Community

Community support is available on the druid-user mailing list (druid-user@googlegroups.com).

Development discussions occur on the druid-development list (druid-development@googlegroups.com).

We also have a couple of people hanging out on IRC in #druid-dev on irc.freenode.net.

Contributing

Please follow the guidelines listed here.