
Releases: apache/druid

druid-0.9.2

01 Dec 21:43

Druid 0.9.2 contains hundreds of performance improvements, stability improvements, and bug fixes from over 30 contributors. Major new features include a new groupBy engine, the ability to disable rollup at ingestion time, the ability to filter on longs, new encoding options for long-typed columns, performance improvements for HyperUnique and DataSketches, a query cache implementation based on Caffeine, a new lookup extension exposing fine-grained caching strategies, support for reading ORC files, and new aggregators for variance and standard deviation.

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.9.2

Documentation for this release is here: http://druid.io/docs/0.9.2/

Highlights

New groupBy engine

Druid now includes a new groupBy engine, rewritten from the ground up for better performance and memory management. Benchmarks show a 2–5x performance boost on our test datasets. The new engine also supports strict limits on memory usage and the option to spill to disk when memory is exhausted, avoiding result row count limitations and potential OOMEs generated by the previous engine.

The new engine is off by default, but you can enable it through configuration or query context parameters. We intend to enable it by default in a future version of Druid.

See "implementation details" on http://druid.io/docs/0.9.2/querying/groupbyquery.html#implementation-details for documentation and configuration.

Added in #2998 by @gianm.

Ability to disable rollup

Since its inception, Druid has had a concept of "dimensions" and "metrics" that applies both at ingestion time and at query time. Druid is one of the only databases that supports aggregation at data loading time, which we call "rollup". For some use cases, however, ingestion-time rollup is not desired, and it's better to load the original data as-is. With rollup disabled, one row in Druid is created for each input row.

Query-time aggregation is, of course, still supported through the groupBy, topN, and timeseries queries.

See the "rollup" flag on http://druid.io/docs/0.9.2/ingestion/index.html for documentation. By default, rollup remains enabled.

Added in #3020 by @kaijianding.

Ability to filter on longs

Druid now supports sophisticated filtering on integer-typed columns, including long metrics and the special __time column. This opens up a number of new capabilities; a sketch of one such filter appears below.

Druid does not yet support grouping on longs. We intend to add this capability in a future release.
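As a sketch of the new capability, a bound filter with numeric ordering on a hypothetical long column named "added" might look like this (the exact filter options should be checked against the 0.9.2 filter docs):

    "filter": {
      "type": "bound",
      "dimension": "added",
      "lower": "100",
      "upper": "500",
      "ordering": "numeric"
    }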

Added in #3180 by @jon-wei.

New long encodings

Until now, all integer-typed columns in Druid, including long metrics and the special __time column, were stored as 64-bit longs optionally compressed in blocks with LZ4. Druid 0.9.2 adds new encoding options which, in many cases, can reduce file sizes and improve performance:

  • Long encoding option "auto", which potentially uses table or delta encoding to store values in fewer than 64 bits per row. The default option, "longs", always uses 64 bits.
  • Compression option "none", which is like the old "uncompressed" option, except it offers a speedup by bypassing block copying.

The default remains "longs" encoding + "lz4" compression. In our testing, two options that often yield useful benefits are "auto" + "lz4" (generally smaller than longs + lz4) and "auto" + "none" (generally faster than longs + lz4, file size impact varies). See the PR for full test results.

See "metricCompression" and "longEncoding" on http://druid.io/docs/0.9.2/ingestion/batch-ingestion.html for documentation.

Added in #3148 by @acslk.

Sketch performance improvements

  • DataSketches speedups of up to 80% from #3471.
  • HyperUnique speedups of 19–30% from #3314, used for "hyperUnique" and "cardinality" aggregators.

New extensions

This release also ships new extensions, noted in the feature summary above: support for reading ORC files and a new lookup extension exposing fine-grained caching strategies.

And much more!

The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.9.2

Updating from 0.9.1.1

Rolling updates

Follow the standard Druid update process described at http://druid.io/docs/0.9.2/operations/rolling-updates.html for rolling updates.

Query time lookups

The druid-namespace-lookup extension, which was deprecated in 0.9.1 in favor of druid-lookups-cached-global, has been removed in 0.9.2. If you are using druid-namespace-lookup, migrate to druid-lookups-cached-global before upgrading to 0.9.2. See our migration guide for details: http://druid.io/docs/0.9.1.1/development/extensions-core/namespaced-lookup.html#transitioning-to-lookups-cached-global

Other notes

Please note the following changes:

  • Druid now ships Guice 4.1.0 rather than 4.0-beta (#3222). This conflicts with the version shipped in some Hadoop distributions, so for Hadoop indexing you may need to adjust your mapreduce.job.classloader or mapreduce.job.user.classpath.first options. In testing we have found this to be an effective workaround (see the sketch after this list). See http://druid.io/docs/0.9.2/operations/other-hadoop.html for details.
  • If you are using Roaring bitmaps, note that compressRunOnSerialization now defaults to true. As a result, segments written will not be readable by Druid 0.8.1 or earlier. If you need segments written by Druid 0.9.2 to be readable by 0.8.1, and you are using Roaring bitmaps, you must set compressRunOnSerialization = false. By default, bitmaps are Concise, not Roaring, so this point will not apply to you unless you overrode that. See #3228 for details.
  • If you use the new long encoding or compression options, segments written by Druid will not be readable by any version older than 0.9.2. If you don't use the new options, segments will remain backwards compatible.
  • If you are using the experimental Kafka indexing service, there is a known issue that may cause task supervision to hang when it tries to stop all running tasks simultaneously during the upgrade process. To prevent this from happening, you can shut down all supervisors and wait for the indexing tasks to complete before updating your overlord. Alternatively, as a workaround you can set chatThreads in the supervisor tuning configuration to a value greater than the number of running tasks.
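For the Guice conflict in the first item, the workaround commonly takes the shape of setting the classloader option through the Hadoop indexing task's jobProperties. A minimal sketch (whether mapreduce.job.classloader or mapreduce.job.user.classpath.first is the right knob depends on your Hadoop distribution; see the linked other-hadoop docs):

    "tuningConfig": {
      "type": "hadoop",
      "jobProperties": {
        "mapreduce.job.classloader": "true"
      }
    }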

Credits

Thanks to everyone who contributed to this release!

@acslk
@AlexanderSaydakov
@ashishawasthi
@b-slim
@chtefi
@dclim
@drcrallen
@du00cs
@ecesena
@erikdubbelboer
@fjy
@Fokko
@gianm
@giaosudau
@guobingkun
@gvsmirnov
@hamlet-lee
@himanshug
@HyukjinKwon
@jaehc
@jianran
@jon-wei
@kaijianding
@leventov
@linbojin
@michaelschiff
@navis
@nishantmonu51
@pjain1
@rajk-tetration
@SainathB
@sirpkt
@vogievetsky
@xvrl
@yuppie-flu

druid-0.9.1.1

29 Jun 18:58

Druid 0.9.1.1 contains only one change since Druid 0.9.1, #3204, which addresses a bug with the Coordinator web console. The full list of changes for the Druid 0.9.1 line is here: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed

Updating from 0.9.0

Query time lookups

Query time lookup (QTL) functionality has been substantially reworked in this release. Most users will need to update their configurations and queries.

The druid-namespace-lookup extension is now deprecated, and will be removed in a future version of Druid. Users should migrate to the new druid-lookups-cached-global extension. Both extensions can be loaded simultaneously to simplify migration. For details about migrating, see Transitioning to lookups-cached-global in the documentation.

Other notes

Aside from the QTL changes, please note the following changes:

  • The default value for maxRowsInMemory has been set to 75,000 for all forms of ingestion. This is in line with previous defaults for Hadoop tasks and Tranquility-based ingestion. If you were creating realtime index tasks directly (without Tranquility), note that this is lower than the previous default of 500,000.
  • The /druid/coordinator/v1/datasources/{dataSourceName}?kill=true&interval={myISO8601Interval} REST endpoint is now deprecated. The new /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval}?kill=true REST endpoint can be used instead (see the example after this list).
  • The druid.indexer.runner.separateIngestionEndpoint property is now deprecated. If you were using this functionality to isolate event-push requests and query serving requests for realtime tasks, you can accomplish something similar with druid.indexer.server.maxChatRequests.
  • For developers of Druid extensions, note that the QueryGranularity constants (ALL, NONE, etc) have been moved to io.druid.granularity.QueryGranularities in #2980. Query syntax is not affected.
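As an illustrative sketch of the new kill endpoint mentioned above (host, port, datasource, and interval are placeholders; the interval separator in the path is an underscore, and the HTTP verb should be confirmed against the Coordinator docs):

    curl -X DELETE "http://coordinator:8081/druid/coordinator/v1/datasources/wikipedia/intervals/2016-06-01_2016-07-01?kill=true"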

Rolling updates

Follow the standard Druid update process described at http://druid.io/docs/0.9.1.1/operations/rolling-updates.html for rolling updates.

Kafka Supervisor

Druid 0.9.1 is the first version to include the experimental Kafka indexing service, utilizing a new Kafka-type indexing task and a supervisor that runs within the Druid overlord. The Kafka indexing service provides an exactly-once ingestion guarantee and removes the restriction that event timestamps must fall within a window period. More details about this feature are available in the documentation: http://druid.io/docs/0.9.1.1/development/extensions-core/kafka-ingestion.html.

Note: The Kafka indexing service uses the Java Kafka consumer introduced in Kafka 0.9. As there were protocol changes made in this version, Kafka 0.9 consumers are not compatible with older brokers, and you will need to ensure that your Kafka brokers are version 0.9 or later. Details on upgrading to the latest version of Kafka can be found here: http://kafka.apache.org/documentation.html#upgrade

New Features

#2656 Supervisor for KafkaIndexTask
#2602 implement special distinctcount
#2220 Appenderators, DataSource metadata, KafkaIndexTask
#2424 Enabling datasource level authorization in Druid
#2410 statsd-emitter
#1576 [QTL] Query time lookup cluster wide config

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3AFeature

Improvements

#2972 Improved Segment Distribution (new cost function)
#2931 Optimize filter for timeseries, search, and select queries
#2753 More consistent empty-set filtering behavior on multi-value columns
#2727 BoundFilter optimizations, and related interface changes.
#2711 All Filters should work with FilteredAggregators
#2690 Allow filters to use extraction functions
#2577 Implement native in filter

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3AImprovement

Bug Fixes

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3ABug

Documentation

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3ADocumentation

Thanks to everyone who contributed to this release!
@acslk
@b-slim
@binlijin
@bjozet
@dclim
@drcrallen
@du00cs
@erikdubbelboer
@fjy
@gaodayue
@gianm
@guobingkun
@harshjain2
@himanshug
@jaehc
@javasoze
@jisookim0513
@jon-wei
@JonStrabala
@kilida
@lizhanhui
@michaelschiff
@mrijke
@navis
@nishantmonu51
@pdeva
@pjain1
@rasahner
@sascha-coenen
@se7entyse7en
@shekhargulati
@sirpkt
@skilledmonster
@spektom
@xvrl
@yuppie-flu

druid-0.9.1

28 Jun 23:23

Druid 0.9.1 contains hundreds of performance improvements, stability improvements, and bug fixes from over 30 contributors. Major new features include an experimental Kafka Supervisor to support exactly-once consumption from Apache Kafka, support for cluster-wide query-time lookups (QTL), and an improved segment balancing algorithm.

The full list of changes is here: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed

Updating from 0.9.0

Query time lookups

Query time lookup (QTL) functionality has been substantially reworked in this release. Most users will need to update their configurations and queries.

The druid-namespace-lookup extension is now deprecated, and will be removed in a future version of Druid. Users should migrate to the new druid-lookups-cached-global extension. Both extensions can be loaded simultaneously to simplify migration. For details about migrating, see Transitioning to lookups-cached-global in the documentation.

Other notes

Aside from the QTL changes, please note the following changes:

  • The default value for maxRowsInMemory has been set to 75,000 for all forms of ingestion. This is in line with previous defaults for Hadoop tasks and Tranquility-based ingestion. If you were creating realtime index tasks directly (without Tranquility), note that this is lower than the previous default of 500,000.
  • The /druid/coordinator/v1/datasources/{dataSourceName}?kill=true&interval={myISO8601Interval} REST endpoint is now deprecated. The new /druid/coordinator/v1/datasources/{dataSourceName}/intervals/{interval}?kill=true REST endpoint can be used instead.
  • The druid.indexer.runner.separateIngestionEndpoint property is now deprecated. If you were using this functionality to isolate event-push requests and query serving requests for realtime tasks, you can accomplish something similar with druid.indexer.server.maxChatRequests.
  • For developers of Druid extensions, note that the QueryGranularity constants (ALL, NONE, etc) have been moved to io.druid.granularity.QueryGranularities in #2980. Query syntax is not affected.

Rolling updates

Follow the standard Druid update process described at http://druid.io/docs/0.9.1/operations/rolling-updates.html for rolling updates.

Kafka Supervisor

Druid 0.9.1 is the first version to include the experimental Kafka indexing service, utilizing a new Kafka-type indexing task and a supervisor that runs within the Druid overlord. The Kafka indexing service provides an exactly-once ingestion guarantee and removes the restriction that event timestamps must fall within a window period. More details about this feature are available in the documentation: http://druid.io/docs/0.9.1/development/extensions-core/kafka-ingestion.html.

Note: The Kafka indexing service uses the Java Kafka consumer introduced in Kafka 0.9. As there were protocol changes made in this version, Kafka 0.9 consumers are not compatible with older brokers, and you will need to ensure that your Kafka brokers are version 0.9 or later. Details on upgrading to the latest version of Kafka can be found here: http://kafka.apache.org/documentation.html#upgrade

New Features

#2656 Supervisor for KafkaIndexTask
#2602 implement special distinctcount
#2220 Appenderators, DataSource metadata, KafkaIndexTask
#2424 Enabling datasource level authorization in Druid
#2410 statsd-emitter
#1576 [QTL] Query time lookup cluster wide config

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3AFeature

Improvements

#2972 Improved Segment Distribution (new cost function)
#2931 Optimize filter for timeseries, search, and select queries
#2753 More consistent empty-set filtering behavior on multi-value columns
#2727 BoundFilter optimizations, and related interface changes.
#2711 All Filters should work with FilteredAggregators
#2690 Allow filters to use extraction functions
#2577 Implement native in filter

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3AImprovement

Bug Fixes

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3ABug

Documentation

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.1+is%3Aclosed+label%3ADocumentation

Thanks to everyone who contributed to this release!
@acslk
@b-slim
@binlijin
@bjozet
@dclim
@drcrallen
@du00cs
@erikdubbelboer
@fjy
@gaodayue
@gianm
@guobingkun
@harshjain2
@himanshug
@jaehc
@javasoze
@jisookim0513
@jon-wei
@JonStrabala
@kilida
@lizhanhui
@michaelschiff
@mrijke
@navis
@nishantmonu51
@pdeva
@pjain1
@rasahner
@sascha-coenen
@se7entyse7en
@shekhargulati
@sirpkt
@skilledmonster
@spektom
@xvrl
@yuppie-flu

Druid 0.9.0

13 Apr 18:58

Druid 0.9.0 introduces an update to the extension system that requires configuration changes. Additionally, over 400 pull requests were merged between 0.8.3 and 0.9.0. Below we highlight the more important changes in this release.

Full list of changes is here: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed

Updating from 0.8.x

Extensions

In Druid 0.9, we have refactored the extension loading mechanism. The main reason behind this change is to let Druid load extensions from the local file system without having to download dependencies from the internet at runtime.

To learn all about the new extension loading mechanism, see Include extensions and Include Hadoop Dependencies. If you are impatient, here is the summary.

The following properties have been deprecated:
druid.extensions.coordinates
druid.extensions.remoteRepositories
druid.extensions.localRepository
druid.extensions.defaultVersion

Instead, specify druid.extensions.loadList, druid.extensions.directory and druid.extensions.hadoopDependenciesDir.

druid.extensions.loadList specifies the list of extensions that will be loaded by Druid at runtime. An example would be druid.extensions.loadList=["druid-datasketches", "mysql-metadata-storage"].

druid.extensions.directory specifies the directory where all the extensions live. An example would be druid.extensions.directory=/xxx/extensions.

Note that the mysql-metadata-storage extension is not packaged in the Druid distribution due to licensing issues. You will have to download it from druid.io manually, decompress it, and then put it in the extensions directory you specified.

druid.extensions.hadoopDependenciesDir specifies the directory where all the Hadoop dependencies live. An example would be druid.extensions.hadoopDependenciesDir=/xxx/hadoop-dependencies. Note: the way of specifying which Hadoop version to use has not changed, so you just need to make sure the Hadoop version you want exists underneath /xxx/hadoop-dependencies.
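Putting the three properties together, a common.runtime.properties snippet might look like this (paths and extension names are illustrative):

    druid.extensions.directory=/xxx/extensions
    druid.extensions.hadoopDependenciesDir=/xxx/hadoop-dependencies
    druid.extensions.loadList=["druid-datasketches", "mysql-metadata-storage"]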

You might now wonder if you have to manually put extensions inside /xxx/extensions and /xxx/hadoop-dependencies. The answer is no; they are already created for you. Download the latest Druid tarball at http://druid.io/downloads.html, unpack it, and you will see the extensions and hadoop-dependencies folders there. Simply copy them to /xxx/extensions and /xxx/hadoop-dependencies respectively, and you are all set!

If the extension or Hadoop dependency you want to load is not included in the core extensions, you can use pull-deps to download it to your extensions directory.

If you want to load your own extension, you can first run mvn install to install it into your local repository, and then use pull-deps to download it to your extensions directory.
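A sketch of a pull-deps invocation from the unpacked distribution (the coordinates below are placeholders; check the pull-deps documentation for the exact flags):

    java -classpath "lib/*" io.druid.cli.Main tools pull-deps \
      -c io.druid.extensions.contrib:some-extension:0.9.0 \
      -h org.apache.hadoop:hadoop-client:2.3.0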

Please feel free to leave any questions regarding the migration.

Extensions have also been reorganized into core and contrib extensions. Core extensions are maintained by Druid committers and are packaged as part of the download tarball. Contrib extensions are community maintained and can be installed as needed. For more information, please see here.

Ordering of Dimensions

Through Druid 0.8.x, the order of dimensions given at indexing time did not affect the way data was indexed. Rows were ordered first by timestamp, then by dimension values, in lexicographical order of dimension names.

As of Druid 0.9.0, Druid respects the dimension order given at indexing time and will order rows first by timestamp, then by dimension values, in that order.

This means segments may now vary in size depending on the order in which dimensions are given. Specifying a dimension with many unique values first may result in worse compression than specifying dimensions with repeating values first.
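For example, with hypothetical dimensions, listing low-cardinality columns before high-cardinality ones in the dimensionsSpec may compress better:

    "dimensionsSpec": {
      "dimensions": ["country", "city", "user_id"]
    }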

Min/Max Aggregators no longer supported, use doubleMin/doubleMax instead

As indicated in the 0.8.3 release notes, min/max aggregators have been removed in favor of doubleMin, doubleMax, longMin, and longMax aggregators.

If you have any issues starting up because of this, please see #2749.

Configuration changes

druid.indexer.task.baseDir and druid.indexer.task.baseTaskDir now default to using the standard Java temporary directory specified by the java.io.tmpdir system property, instead of /tmp.

Other issues to be aware of: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed+label%3A%22Release+Notes%22 and https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed+label%3AIncompatible

New Features

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed+label%3AFeature

#1719 Add Rackspace Cloud Files Deep Storage Extension
#1858 Support avro ingestion for realtime & hadoop batch indexing
#1873 add ability to express CONCAT as an extractionFn
#1921 Add docs and benchmark for JSON flattening parser
#1936 adding Upper/Lower Bound Filter
#1978 Graphite emitter
#1986 Preserve dimension order across indexes during ingestion
#2008 Regex search query
#2014 Support descending time ordering for time series query
#2043 Add dimension selector support for groupby/having filter
#2076 adding lower and upper extraction fn
#2209 support cascade execution of extraction filters in extraction dimension spec
#2221 Allow change minTopNThreshold per topN query
#2264 Adding custom mapper for json processing exception
#2271 time-descending result of select queries
#2258 acl for zookeeper is added

Improvements

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed+label%3AImprovement

#984 Use thread priorities. (aka set nice values for background-like tasks)
#1638 Remove Maven client at runtime + Provide a way to load Druid extensions through local file system
#1728 Store AggregatorFactory[] in segment metadata
#1988 support multiple intervals in dataSource inputSpec
#2006 Preserve dimension order across indexes during ingestion
#2047 optimize InputRowSerde
#2075 Configurable value replacement on match failure for RegexExtractionFn
#2079 reduce bytearray copy to minimal optimize VSizeIndexedWriter
#2084 minor optimize IndexMerger's MMappedIndexRowIterable
#2094 Simplifying dimension merging
#2107 More efficient SegmentMetadataQuery
#2111 optimize create inverted indexes
#2138 build v9 directly
#2228 Improve heap usage for IncrementalIndex
#2261 Prioritize loading of segments based on segment interval
#2306 More specific null/empty str handling in IndexMerger

Bug Fixes

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed+label%3ABug

Documentation

Full list: https://github.com/druid-io/druid/issues?q=milestone%3A0.9.0+is%3Aclosed+label%3ADocumentation

#2100 doc update to make it easy to find how to do re-indexing or delta ingestion
#2186 Add intro developer docs
#2279 Some more multitenancy docs
#2364 Add more docs around timezone handling
#2216 Completely rework the Druid getting started process

Thanks to everyone who contributed to this patch!
@fjy
@xvrl
@drcrallen
@pjain1
@chtefi
@liubin
@salsakran
@jaebinyo
@erikdubbelboer
@gianm
@bjozet
@navis
@AlexanderSaydakov
@himanshug
@guobingkun
@abbondanza
@binlijin
@rasahner
@jon-wei
@CHOIJAEHONG1
@loganlinn
@michaelschiff
@himank
@nishantmonu51
@sirpkt
@duilio
@pdeva
@KurtYoung
@mangesh-pardeshi
@dclim
@desaianuj
@stevemns
@b-slim
@cheddar
@jkukul
@AdrieanKhisbe
@liuqiyun
@codingwhatever
@clintropolis
@zhxiaogg
@rohitkochar
@itsmee
@Angelmmiguel
@Noddi
@se7entyse7en
@zhaown
@genevien

Druid 0.8.3 - Stable

26 Jan 23:51

Updating from 0.8.x

  • You must set druid.selectors.coordinator.serviceName to your Coordinator's druid.service value (defaults to druid/coordinator) in common.runtime.properties of all nodes (see the snippet after this list). Realtime handoff will only work if this config is properly set. (See #2015)
  • Instead of the normal rolling update procedure, for this release you should update your Coordinator nodes before updating the overlord. (See #2015)
  • Min/max aggregators are now deprecated and will be removed in Druid 0.9.0. Please use longMin, longMax, doubleMin, or doubleMax aggregators as appropriate.
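The first item amounts to a single line in common.runtime.properties on every node, shown here with the default Coordinator service name:

    druid.selectors.coordinator.serviceName=druid/coordinator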

New Features

Improvements

  • #1770 Add segment merge time as a metric
  • #1791 EventReceiverFirehoseMonitor
  • #1824 Add hashCode and equals to UniformGranularitySpec
  • #1889 update server metrics and emitter version
  • #1920 Update curator to 2.9.1
  • #1929 separate ingestion and query thread pool
  • #1960 optimize index merge
  • #1967 Add datasource and taskId to metrics emitted by peons
  • #1973 CacheMonitor - make cache injection optional
  • #2015 Remove ServerView from RealtimeIndexTasks and use coordinator http endpoint for handoffs
  • #2045 Update mmx emitter to 0.3.6
  • #2145 druid.indexer.task.restoreTasksOnRestart configuration

Bug Fixes

  • #1387 Add special handler to allow logger messages during shutdown
  • #1799 Support multiple outer aggregators of same type
  • #1815 Fix Race in jar upload during hadoop indexing
  • #1842 Do not pass druid.indexer.runner.javaOpts to Peon as a property
  • #1867 fixing hadoop test scope dependencies in indexing-hadoop
  • #1888 forward cancellation request to all brokers, fixes #1802
  • #1917 RemoteTaskActionClient: Fix statusCode check.
  • #1932 DataSchema: Exclude metric names from dimension list.
  • #1935 ForkingTaskRunner: Log without buffering.
  • #1940 Move Jackson Guice adapters into io.druid
  • #1954 EC2 autoscaler: avoid hitting aws filter limits
  • #1985 Change LookupExtractionFn cache key to be unique
  • #2036 Disable javadoc linting
  • #1973 Make cache injection optional
  • #2141 Fix some problems with restoring
  • #2227 Update bytebuffer-collections to 0.2.4 (upstream bugfixes in roaring bitmaps)
  • #2240 Fix loadRule when one of the tiers had no available servers
  • #2207 Fix bug for thetaSketch metric not working with select queries
  • #2266 Fix loss in segment announcements when segments do not fit in zNode
  • #2189 add ChatHandlerServerModule to realtime example
  • #2338 Fix tutorial so indexing service can start up

Documentation

  • #1832 add examples for duration and period granularities
  • #1843 "druid.manager.segment" should be "druid.manager.segments"
  • #1854 Fix documentation about lookup
  • #1900 fix doc - correct default value for maxRowsInMemory

Thanks to all the contributors to this release!

@b-slim
@binlijin
@dclim
@drcrallen
@fjy
@gianm
@guobingkun
@himanshug
@nishantmonu51
@pjain1
@xvrl

Druid 0.8.2 - Stable

18 Nov 17:17

Updating from 0.8.1

If you are using union queries, please make sure to update broker nodes prior to updating any historical nodes, realtime nodes, or indexing service.

Otherwise, you can follow standard rolling update procedures.

New Features

  • #1744 Memcached connection pooling
  • #1753 Allow SegmentMetadataQuery to skip cardinality and size calculations
  • #1609 Experimental Kafka simple consumer based firehose
  • #1800 Experimental Hybrid L1/L2 cache

Improvements

  • #1821 cache max data timestamp in QueryableIndexStorageAdapter
  • #1765 Add CPUTimeMetricQueryRunner to ClientQuerySegmentWalker
  • #1776 Modified the Twitter firehose to process more properties
  • #1748 Allow ForkingTaskRunner javaOpts to have quoted arguments which contain spaces
  • #1759 better faster smaller roaring bitmaps
  • #1755 update druid-api for timestamp parsing speedup
  • #1756 improving messaging when indexing service is not found
  • #1739 Allow SegmentAnalyzer to read columns from StorageAdapter, allow SegmentMetadataQuery to query IncrementalIndexSegments on realtime node
  • #1732 Add support for a configurable default segment history period for segmentMetadata queries and GET /datasources/ lookups
  • #1695 Allow writing InputRowParser extensions that use hadoop/any libraries
  • #1688 More memcached metrics
  • #1712 Add dimension extraction functionality to SearchQuery
  • #1696 Add CPU time to metrics for segment scanning.
  • #1718 Adds task duration to indexer console for completed tasks.
  • #1725 Don't check for sortedness if we already know GenericIndexedWriter isn't sorted
  • #1699 composing emitter module to use multiple emitters together
  • #1639 New plumber
  • #1604 Allow task to override ForkingTaskRunner tunings and jvm settings
  • #1542 add endpoint to fetch rule history for all datasources
  • #1682 Support parsing of BytesWritable strings in HadoopDruidIndexerMapper
  • #1622 Support for JSON Smile format for EventReceiverFirehoseFactory
  • #1654 Add ability to provide taskResource for IndexTask.

Bug Fixes

  • #1868 Removing parent paths causes watchers of the "announcements" path to get stuck
  • #1855 fix [GreaterThan,LessThan,Equals] HavingSpecs
  • #1862 Add timeout to shutdown request to middle manager for indexing service
  • #1822 support multiple non-consecutive intervals in outer query of nested group-by
  • #1811 Server discovery selector ipv6 friendly
  • #1823 For dataSource inputSpec in hadoop batch ingestion, use configured query granularity for reading existing segments instead of NONE
  • #1818 Add hashCode and equals to stock lookups
  • #1812 Bump server-metrics to 0.2.5 to catch a few fixes.
  • #1806 Fix index exceeded msg to give maxRowCount as well
  • #1801 Fix ClientInfoResource
  • #1795 Try and make AnnouncerTest a bit more predictable
  • #1797 ingest segment firehose ut
  • #1798 Update httpcomponents and aws-sdk
  • #1792 GroupByQueryRunnerTest for hyperUnique finalizing post aggregators
  • #1781 Fix failure in nested groupBy with multiple aggregators with same fie…
  • #1790 Cleanup kafka-extraction-namespace
  • #1782 Add analysisTypes to SegmentMetadataQuery cache key
  • #1730 fix #1727 - Union bySegment queries fix
  • #1783 Separate ListColumnIncluderator cache key parts with nul bytes
  • #1740 fix #1715 - Zombie tasks able to acquire locks after failure
  • #1778 Redirect fixes
  • #1777 fail task if finishjob throws any exception
  • #1775 SQLMetadataConnector: Retry table creation, in case something goes wrong.
  • #1772 RemoteTaskRunner: Fix for starting an overlord before any workers ever existed.
  • #1764 Enable logging for memcached in factory
  • #1760 Update memcached client for better concurrency in metrics.
  • #1761 LocalDataSegmentPusher: Fix for Hadoop + relative paths.
  • #1763 fix NPE and duplicate metric keys
  • #1758 Fix memcached cache provider injection and add test
  • #1747 Account for potential gaps in hydrants in sink initialization, hydrant swapping (e.g. h0, h1, h4)
  • #1751 Soften concurrency requirements on IncrementalIndexTest
  • #1736 IngestSegmentFirehostFactoryTimelineTest for overshadowing of the middle of a segment.
  • #1741 Add better concurrency testing to IncrementalIndexTest
  • #1743 Disable metadata publishing attempt in example script
  • #1697 Better logging of URIExtractionNamespace failures due to missing files
  • #1702 do not have dataSource twice in path to segment storage on hdfs
  • #1710 Add some basic latching to concurrency testing in IncrementalIndexTest
  • #1734 fix broken integration-test
  • #1731 fix NPE with regex extraction function
  • #1700 update indexing in the helper to use multiple persists and merge
  • #1721 fix for "java.io.IOException: No FileSystem for scheme: hdfs" error
  • #1694 Better timing and locking in NamespaceExtractionCacheManagerExecutorsTest
  • #1703 add null check for task context.
  • #1637 Make jetty scheduler threads daemon thread
  • #1658 Hopefully add better timeouts and ordering to JDBCExtractionNamespaceTest
  • #1620 Allow long values in the key or value fields for URIExtractionNamespace
  • #1578 Fix UT and documentation to the extraction filter
  • #1687 do not let user override hadoop job settings explicitly provided by druid code
  • #1689 Update LZ4Transcoder to match Compressed strategy factory type.
  • #1685 Close output streams and channels loudly when creating segments.
  • #1686 Replace funky imports with standard ones.
  • #1683 Remove unused Indexer interface.
  • #1632 Inner Query should build on sub query
  • #1676 fix convert segment task
  • #1672 Migrate TestDerbyConnector to a JUnit @rule
  • #1675 update druid-api for jackson 2.4.6
  • #1668 Code cleanup for CachingClusteredClientTest
  • #1669 Upgrade dependencies
  • #1663 TaskActionToolbox: Remove allowOlderVersions, lift interval constraint
  • #1619 update server metrics
  • [#1661](https://gi...

Druid 0.8.1 - Stable

16 Sep 05:49

Updating from 0.8.0

There should be no update concerns, and standard update procedures can be followed for rolling updates.

New Features

  • #1259 Experimental Query Time Lookups (QTL): ability to do limited joins at query time.
    Simple example use case is Country Code to Country Name.
  • #1374 Experimental Hadoop batch re-indexing and Delta ingestion.
    Re-indexing allows you to ingest existing Druid segments using a new schema, with certain columns removed, changed granularity, etc. "Delta" ingestion allows appending data to an existing interval in a datasource. See the new dataSource inputSpec and multi inputSpec for more information.

Improvements

  • #1465 Read Hadoop configuration file from HDFS
  • #1472 Support using combiner for Hadoop ingestion
  • #1506 Better support for null input rows during ingestion
  • #1518 More support added for Azure deep store
  • #1550 Add configuration option to print all HTTP requests to log
  • #1563 #1602 Improved merging performance on Broker
  • #1567 #1568 Improved error logging for segment activities
  • #1596 Improved coordinator console, now a separate maven dependency instead of giant code dump
  • #1601 Reduced lock contention during segment scan
  • #1603 Improved performance of Lexicographic TopNs
  • #1643 helpful cause explaining why SegmentDescriptorInfo did not exist

Improved test coverage for indexing service, ingestion, and coordinator endpoints

Bug Fixes

  • #1406 Fix groupBy breaking when exceeding max intermediate rows
  • #1441 Fix flush errors being suppressed when closing output streams
  • #1469 Fix inconsistent property names for druid.metadata.* properties
  • #1484 JobHelper.ensurePaths will set properties from config properly
  • #1499 Fix groupBy caching with renamed aggregators
  • #1503 Fix leaking indexing service status nodes in ZK
  • #1534 Fix caching for approximate histograms
  • #1616 Fix dependency error in local index task
  • #1627 Fix realtime tasks getting stuck on shutdown even after status being shown as SUCCESS
  • #1634 Allow IrcFirehoseFactory to shutdown cleanly
  • #1640 Package extensions in release tarball + script to run druid servers
  • #1653 Fix success flag emitted in router query metrics
  • #1659 on kill segment, don't leave version, interval and dataSource dir behind on HDFS
  • #1681 Fix overlapping segments not working for ingest segment firehose

Documentation

  • New documentation for firehoses, evaluating Druid, and plenty of fixes.
  • Improved documentation for working with CDH
  • Added instructions for PostgreSQL metadata store
  • More documentation on how to use ApproximateHistograms

The full list of changes can be found here

Thanks

Special thanks to everyone that contributed (code, docs, etc.) to this release!

@drcrallen
@davideanastasia
@guobingkun
@himanshug
@michaelschiff
@fjy
@krismolendyke
@nishantmonu51
@rasahner
@xvrl
@gianm
@pjain1
@samjhecht
@solimant
@sherry-q
@ubercow
@zhaown
@mvfast
@mistercrunch
@pdeva
@KurtYoung
@onlychoice
@b-slim
@cheddar
@MarConSchneid

Druid 0.8.0 - Stable

15 Jul 17:37

We recently introduced a backwards-incompatible change to the schema Druid uses when it emits metrics. If you are not emitting Druid metrics to an HTTP endpoint, the update procedure should be straightforward.

Updating from 0.7.x

  • If you are emitting Druid metrics to an http endpoint, please consult https://github.com/druid-io/druid/blob/master/docs/content/operations/metrics.md for the new schema used for Druid metrics
  • io.druid.server.metrics.ServerMonitor has been renamed to io.druid.server.metrics.HistoricalMetricsMonitor. You will need to update any configs that reference the old name.
  • A correction to one of the DB index keys requires the migration steps described in #1322

Updating from 0.6.x

New Features

  • Redo Druid metrics to use an understandable metrics schema
  • Support compression for multi-value columns
  • Added longMax/longMin aggregators in addition to the previous min/max (double) aggregators, which have been renamed to doubleMax/doubleMin
  • Added a hadoop_convert_segment task for the indexer to allow large-scale batch re-compression of old data as an indexer task

Improvements

  • Index task now ignores invalid rows (#1264)
  • Improved segment filtering for dataSourceMetadataQuery (#1299)
  • Numerous additional unit tests

Bug Fixes

  • Fixed deprecated warnings in Hadoop batch indexing (#1275). Thanks @infynyxx!
  • Fix groupBys applying limitSpecs to historicals on post aggregations (#1292). Thanks @guobingkun!
  • Fix incorrectly typed values in metadata SQL queries (#1295). Thanks @anubhgup!
  • Fix timeBoundary cache serde problems (#1303)
  • Fix serde issue with pulling timestamps from cache (#1304)
  • Fixed concatenated gzip files with static s3 firehose (#1311)
  • Fix audit table config serde problems (#1322)
  • Fix IRC firehose serde (#1331)
  • Fix Arithmetic exceptions on the broker (#1336)
  • Fix an error where the Convert Segment Task would leave zombie tasks if the task failed (#1363)
  • Fixed #1365 to return actual complex metric name in segment metadata query response
  • Fix groupBy caching to work with renamed aggregators (#1499)

Documentation

Druid 0.7.3 - Stable

27 May 20:23

This release mainly delivers dimension compression and a rework of the Druid documentation. There are no update concerns with this version of Druid.

New Features

  • Added support for dimension compression of segment columns, enabled by default. Compression is applied to the column storing the dimension value indices, but not to the dimension values themselves. This change only applies to single-value dimensions; multi-value dimensions are left uncompressed. With real-world data we have seen segment sizes reduced by 50% for some datasources, but actual compression ratios will vary based on the data. Sparse and repetitive columns will benefit the most, whereas more random and higher-cardinality columns will benefit less. Old segments can be converted using the updated VersionConverterTask.
  • Initial support for Microsoft Azure as a deep storage option has been added. Thanks @davrodpin!

Improvements

  • Improved VersionConverterTask to allow for an IndexSpec and forced updates. This enables converting old segments to use dimension compression.
  • Improved how the datasource metadata query filters on segments to scan.

Bug Fixes

  • Ignore rows with invalid interval for index task (#1264)
  • always re-upload snapshot self-contained jars to hdfs (#1261)
  • Skip raising false alert when the coordinator loses leadership (#1224)
  • Fix an issue where, after the broker forwards a GroupByQuery to historicals, the havingSpec was applied again (#1292). Thanks @guobingkun!
  • Fix incorrect types in the update SQL statement for the metadata storage (#1295). Thanks @anubhgup!
  • fix serde issue when pulling timestamps from cache (#1304)

Documentation

Druid 0.7.1.1 - Stable

09 Apr 23:01

New Features

  • Group results by day of week, hour of day, etc.

    We added support for time extraction functions so you can group results by anything DateTimeFormatter supports. For more details, see http://druid.io/docs/latest/DimensionSpecs.html#time-format-extraction-function (a sketch appears after this list).

  • Audit rule and dynamic configuration changes

    Druid now provides support for remembering why a rule or configuration change was made, and who made the change. Note that you must provide the author and comment fields yourself. The IP which issued the configuration change will be recorded by default. For more details, see headers "X-Druid-Author" and "X-Druid-Comment" on http://druid.io/docs/latest/Coordinator.html

  • Provide support for a password provider for the metadata store

    This enables people to write a module extension which implements the logic for getting a password to the metadata store.

  • Enable servlet filters on Druid nodes

    This enables people to write authentication filters for Druid requests.
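As promised under "Group results by day of week, hour of day, etc." above, here is a sketch of an extraction dimensionSpec that groups on the day-of-week name using the time-format extraction function (the output name is arbitrary, and "EEEE" is a DateTimeFormatter pattern for the full day name):

    {
      "type": "extraction",
      "dimension": "__time",
      "outputName": "dayOfWeek",
      "extractionFn": {
        "type": "timeFormat",
        "format": "EEEE"
      }
    }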

Improvements

  • Query parallelization on the broker for long interval queries

    We’ve added the ability to break up a long-interval query into multiple shorter-interval queries that can be run in parallel. This should improve the performance of more expensive groupBys. For more details, see "chunkPeriod" on http://druid.io/docs/latest/Querying.html#query-context (example after this list).

  • Better schema exploration

    The broker can now return the dimensions and metrics for a datasource broken down by interval.

  • Improved code coverage

    We’ve added numerous unit tests to improve code coverage and will be tracking coverage in the future with Coveralls.

  • Additional ingestion metrics

    Added additional metrics for failed persists and failed handoffs.

  • Configurable InputFormat for batch ingestion (#1177)
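And the example promised under query parallelization: chunkPeriod is passed in the query context as an ISO 8601 period, here splitting a long query into month-long chunks (the rest of the query is omitted):

    "context": {
      "chunkPeriod": "P1M"
    }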

Bug Fixes

  • Fixed a bug where sometimes the broker and coordinator would miss announcements of segments, leading to null pointer exceptions. (#1161)
  • Fixed a bug where groupBy queries would fail when aggregators and post-aggregators were named the same (#1044)
  • Fixed a bug where not including a pagingSpec in a select query generated an obscure NPE (#1165). Thanks to friedhardware!
  • "bySegment" groupBy queries should now work (#1180)
  • Honor ignoreInvalidRows in reducer for Hadoop indexing
  • Download dependencies from Maven Central over https
  • Updated MySQL connector to fix issues with recent MySQL versions
  • Fix timeBoundary query on union datasources (#1243)
  • Fix Guice injections for DruidSecondaryModule (#1245)
  • Fix log4j version dependencies (#1239)
  • Fix NPE when partition number 0 does not exist (#1190)
  • Fix arbitrary granularity spec (#1214) and ignore rows with invalid interval for index task (#1264)
  • Fix thread starvation in AsyncQueryForwardingServletTest (#1233)
  • More useful ZooKeeper log messages
  • Various new unit tests
  • Updated MapDB to 1.0.7 for bugfixes
  • Fix re-uploading of self-contained SNAPSHOT jars when developing on Hadoop (#1261)

Documentation

  • Reworked the flow of Druid documentation and fixed numerous errors along the way.
  • Thanks to @infynyxx for fixing many of our broken links!
  • Thanks to @mrijke for many fixes with metrics and emitter configuration.
  • Thanks to @b-slim, @bobrik and @andrewserff for documentation and example fixes

Misc