
Druid 0.13.0-incubating release notes #6442

Closed
dclim opened this Issue Oct 10, 2018 · 28 comments



dclim commented Oct 10, 2018

Druid 0.13.0-incubating contains over 400 new features, performance/stability/documentation improvements, and bug fixes from 81 contributors. It is the first release of Druid in the Apache Incubator program. Major new features and improvements include:

  • native parallel batch indexing
  • automatic segment compaction
  • system schema tables
  • improved indexing task status, statistics, and error reporting
  • SQL-compatible null handling
  • result-level broker caching
  • ingestion from RDBMS
  • Bloom filter support
  • additional SQL result formats
  • additional aggregators (stringFirst/stringLast, ArrayOfDoublesSketch, HllSketch)
  • support for multiple grouping specs in groupBy query
  • mutual TLS support
  • HTTP-based worker management
  • broker backpressure
  • maxBytesInMemory ingestion tuning configuration
  • materialized views (community extension)
  • parser for Influx Line Protocol (community extension)
  • OpenTSDB emitter (community extension)

The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+is%3Aclosed+milestone%3A0.13.0

Documentation for this release is at: http://druid.io/docs/0.13.0-incubating/

Highlights

Native parallel batch indexing

Introduces the index_parallel supervisor which manages the parallel batch ingestion of splittable sources without requiring a dependency on Hadoop. See http://druid.io/docs/latest/ingestion/native_tasks.html for more information.
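As a sketch (abridged; the full spec schema is in the native tasks documentation linked above, and field values here are placeholders), a parallel batch task submitted to the overlord might look like:

```json
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": { "column": "timestamp", "format": "auto" },
          "dimensionsSpec": { "dimensions": [] }
        }
      },
      "granularitySpec": { "segmentGranularity": "DAY", "queryGranularity": "NONE" }
    },
    "ioConfig": {
      "type": "index_parallel",
      "firehose": { "type": "local", "baseDir": "/data/wikipedia", "filter": "*.json" }
    },
    "tuningConfig": { "type": "index_parallel", "maxNumSubTasks": 4 }
  }
}
```

The firehose must be a splittable type (e.g. local, static-s3) for the work to be parallelized across sub-tasks.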

Note: This is the initial single-phase implementation and has limitations on how it expects the input data to be partitioned. Notably, it does not yet have a shuffle implementation, which will be added in the next iteration of this feature. For more details, see the proposal at #5543.

Added by @jihoonson in #5492.

Automatic segment compaction

Previously, compacting small segments into optimally-sized ones to improve query performance required submitting and running compaction or re-indexing tasks. This was often a manual process or required an external scheduler to handle the periodic submission of tasks. This patch implements automatic segment compaction managed by the coordinator service.

Note: This is the initial implementation and has limitations on interoperability with realtime ingestion tasks. Indexing tasks currently require acquisition of a lock on the portion of the timeline they will be modifying to prevent inconsistencies from concurrent operations. This implementation uses low-priority locks to ensure that it never interrupts realtime ingestion, but this also means that compaction may fail to make any progress if the realtime tasks are continually acquiring locks on the time interval being compacted. This will be improved in the next iteration of this feature with finer-grained locking. For more details, see the proposal at #4479.

Documentation for this feature: http://druid.io/docs/0.13.0-incubating/design/coordinator.html#compacting-segments
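As a sketch, compaction is configured per datasource by POSTing a config to the coordinator (endpoint and field names are assumptions to verify against the documentation linked above):

```json
{
  "dataSource": "wikipedia",
  "targetCompactionSizeBytes": 419430400,
  "skipOffsetFromLatest": "P1D"
}
```

Here skipOffsetFromLatest keeps the coordinator away from the most recent interval, reducing contention with realtime tasks per the note above.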

Added by @jihoonson in #5102.

System schema tables

Adds a system schema to the SQL interface which contains tables exposing information on served and published segments, nodes of the cluster, and information on running and completed indexing tasks.

Note: This implementation contains some known overhead inefficiencies that will be addressed in a future patch.

Documentation for this feature: http://druid.io/docs/0.13.0-incubating/querying/sql.html#system-schema
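For example, segment and task state can be inspected directly via SQL (column names should be verified against the linked documentation):

```sql
-- Row counts and sizes of published segments, per datasource
SELECT "datasource", SUM("num_rows") AS total_rows, SUM("size") AS total_bytes
FROM sys.segments
WHERE is_published = 1
GROUP BY "datasource";

-- Status of recent indexing tasks
SELECT "task_id", "type", "datasource", "status"
FROM sys.tasks;
```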

Added by @surekhasaharan in #6094.

Improved indexing task status, statistics, and error reporting

Improves the performance and detail of the ingestion-related APIs, which were previously quite opaque, making it difficult to determine the cause of parse exceptions, task failures, and the actual output from a completed task. Also adds improved ingestion metric reporting, including moving-average throughput statistics.

Added by @surekhasaharan and @jon-wei in #5801, #5418, and #5748.

SQL-compatible null handling

Improves Druid's handling of null values by treating them as missing values instead of being equivalent to empty strings or a zero-value. This makes Druid more SQL compatible and improves integration with external BI tools supporting ODBC/JDBC. See #4349 for proposal.

To enable this feature, you will need to set the system-wide property druid.generic.useDefaultValueForNull=false.

Added by @nishantmonu51 in #5278 and #5958.

Result-level broker caching

Implements result-level caching on brokers which can operate concurrently with the traditional segment-level cache. See #4843 for proposal.

Documentation for this feature: http://druid.io/docs/0.13.0-incubating/configuration/index.html#broker-caching
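A sketch of the broker runtime.properties settings involved (property names per the linked configuration docs; the size limit value here is illustrative):

```properties
# Enable reading from and writing to the result-level cache on the broker
druid.broker.cache.useResultLevelCache=true
druid.broker.cache.populateResultLevelCache=true
# Results larger than this many bytes are not cached
druid.broker.cache.resultLevelCacheLimit=5242880
```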

Added by @a2l007 in #5028.

Ingestion from RDBMS

Introduces a sql firehose which supports data ingestion directly from an RDBMS.
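A sketch of the firehose portion of an ingestion spec using it (connection details are placeholders; the matching metadata storage extension must be loaded to supply the JDBC driver):

```json
"firehose": {
  "type": "sql",
  "database": {
    "type": "mysql",
    "connectorConfig": {
      "connectURI": "jdbc:mysql://db-host:3306/mydb",
      "user": "user",
      "password": "password"
    }
  },
  "sqls": ["SELECT timestamp, page, added FROM pageviews"]
}
```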

Added by @a2l007 in #5441.

Bloom filter support

Adds support for optimizing Druid queries by applying a Bloom filter generated by an external system such as Apache Hive. In the future, #6397 will support generation of Bloom filters as the result of Druid queries which can then be used to optimize future queries.
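A sketch of what such a filter looks like in a native query, where bloomKFilter carries the serialized filter produced by the external system:

```json
{
  "type": "bloom",
  "dimension": "user_id",
  "bloomKFilter": "<Base64-serialized BloomKFilter bytes>"
}
```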

Added by @nishantmonu51 in #6222.

Additional SQL result formats

Adds result formats for line-based JSON and CSV, as well as X-Druid-Column-Names and X-Druid-Column-Types response headers containing the list of columns in the result.
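For example, a SQL query POSTed to the broker's SQL endpoint can request CSV output with a header row (parameter names per the SQL documentation for this release):

```json
{
  "query": "SELECT page, COUNT(*) AS edits FROM wikipedia GROUP BY page",
  "resultFormat": "csv",
  "header": true
}
```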

Added by @gianm in #6191.

'stringLast' and 'stringFirst' aggregators

Introduces two complementary aggregators, stringLast and stringFirst which operate on string columns and return the value with the maximum and minimum timestamp respectively.
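A sketch of the aggregator spec (maxStringBytes caps the stored string length; verify field names against the aggregations documentation):

```json
{
  "type": "stringLast",
  "name": "last_status",
  "fieldName": "status",
  "maxStringBytes": 1024
}
```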

Added by @andresgomezfrr in #5789.

ArrayOfDoublesSketch

Adds support for numeric Tuple sketches, which extend the functionality of the count distinct Theta sketches by adding arrays of double values associated with unique keys.

Added by @AlexanderSaydakov in #5148.

HllSketch

Adds a configurable implementation of a count distinct aggregator based on HllSketch from https://github.com/DataSketches. Comparison to Druid's native HyperLogLogCollector shows improved accuracy, efficiency, and speed: https://datasketches.github.io/docs/HLL/HllSketchVsDruidHyperLogLogCollector.html
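A sketch of the aggregator spec (HLLSketchBuild consumes raw column values; a companion HLLSketchMerge type combines pre-built sketch columns; lgK and tgtHllType trade accuracy against size):

```json
{
  "type": "HLLSketchBuild",
  "name": "unique_users",
  "fieldName": "user_id",
  "lgK": 12,
  "tgtHllType": "HLL_4"
}
```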

Added by @AlexanderSaydakov in #5712.

Support for multiple grouping specs in groupBy query

Adds support for the subtotalsSpec groupBy parameter which allows Druid to be efficient by reusing intermediate results at the broker level when running multiple queries that group by subsets of the same set of columns. See proposal in #5179 for more information.
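A sketch of a groupBy query using it, where each inner list is one grouping set computed by reusing the same intermediate results:

```json
{
  "queryType": "groupBy",
  "dataSource": "wikipedia",
  "intervals": ["2018-01-01/2018-01-02"],
  "granularity": "all",
  "dimensions": ["country", "city"],
  "aggregations": [{ "type": "count", "name": "rows" }],
  "subtotalsSpec": [["country", "city"], ["country"], []]
}
```

The empty list produces the grand total, analogous to SQL GROUPING SETS.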

Added by @himanshug in #5280.

Mutual TLS support

Adds support for mutual TLS (server certificate validation + client certificate validation). See: https://en.wikipedia.org/wiki/Mutual_authentication

Added by @jon-wei in #6076.

HTTP-based worker management

Adds an HTTP-based indexing task management implementation to replace the previous one based on ZooKeeper. Part of a set of improvements to reduce and eventually eliminate Druid's dependency on ZooKeeper. See #4996 for proposal.

Added by @himanshug in #5104.

Broker backpressure

Allows the broker to exert backpressure on data-serving nodes to prevent the broker from crashing under memory pressure when results are coming in faster than they are being read by clients.

Added by @gianm in #6313.

'maxBytesInMemory' ingestion tuning configuration

Previously, a major tuning parameter for indexing task memory management was the maxRowsInMemory configuration, which determined the threshold for spilling the contents of memory to disk. This was difficult to properly configure since the 'size' of a row varied based on multiple factors. maxBytesInMemory makes this configuration byte-based instead of row-based.
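A sketch of a task tuningConfig using the new byte-based threshold (the value is illustrative; when unset, a default is derived from the JVM heap size):

```json
"tuningConfig": {
  "type": "index",
  "maxBytesInMemory": 100000000
}
```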

Added by @surekhasaharan in #5583.

Materialized views

Supports the creation of materialized views which can improve query performance in certain situations at the cost of additional storage. See http://druid.io/docs/latest/development/extensions-contrib/materialized-view.html for more information.

Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.

Added by @zhangxinyu1 in #5556.

Parser for Influx Line Protocol

Adds support for ingesting the Influx Line Protocol data format. For more information, see: https://docs.influxdata.com/influxdb/v1.6/write_protocols/line_protocol_tutorial/

Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.

Added by @njhartwell in #5440.

OpenTSDB emitter

Adds support for emitting Druid metrics to OpenTSDB.

Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.

Added by @QiuMM in #5380.

Updating from 0.12.3 and earlier

Please see below for changes between 0.12.3 and 0.13.0 that you should be aware of before upgrading. If you're updating from an earlier version than 0.12.3, please see release notes of the relevant intermediate versions for additional notes.

MySQL metadata storage extension no longer includes JDBC driver

The MySQL metadata storage extension is now packaged together with the Druid distribution but without the required MySQL JDBC driver (due to licensing restrictions). To use this extension, the driver will need to be downloaded separately and added to the extension directory.

See http://druid.io/docs/latest/development/extensions-core/mysql.html for more details.

AWS region configuration required for S3 extension

As a result of switching from jets3t to the AWS SDK (#5382), users of the S3 extension are now required to explicitly set the target region. This can be done by setting the JVM system property aws.region or the environment variable AWS_REGION.

As an example, to set the region to 'us-east-1' through system properties:

  • add -Daws.region=us-east-1 to the jvm.config file for all Druid services
  • add -Daws.region=us-east-1 to druid.indexer.runner.javaOpts in middleManager/runtime.properties so that the property will be passed to peon (worker) processes

Ingestion spec changes

As a result of renaming packaging from io.druid to org.apache.druid, ingestion specs that reference classes by their fully-qualified class name will need to be modified accordingly.

As an example, if you are using the Parquet extension with Hadoop indexing, the inputFormat field of the inputSpec will need to change from io.druid.data.input.parquet.DruidParquetInputFormat to org.apache.druid.data.input.parquet.DruidParquetInputFormat.
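For that example, the relevant portion of the Hadoop ingestion spec would end up looking like this (paths value is a placeholder):

```json
"ioConfig": {
  "type": "hadoop",
  "inputSpec": {
    "type": "static",
    "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat",
    "paths": "hdfs://path/to/data.parquet"
  }
}
```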

Metrics changes

New metrics

  • task/action/log/time - Milliseconds taken to log a task action to the audit log (#5714)
  • task/action/run/time - Milliseconds taken to execute a task action (#5714)
  • query/node/backpressure - Nanoseconds the channel is unreadable due to backpressure being applied (#6335) (Note that this is not enabled by default and requires a custom implementation of QueryMetrics to emit)

New dimensions

  • taskId and taskType added to task-related metrics (#5664)

Other

  • HttpPostEmitterMonitor no longer emits maxTime and minTime if no times were recorded (#6418)

Rollback restrictions

64-bit doubles aggregators

64-bit doubles aggregators are now used by default (see #5478). Support for 64-bit floating point columns was released in Druid 0.11.0, so with doubles enabled, segments written by this version cannot be read by Druid versions older than 0.11.0 if you need to roll back.

To disable and keep the old format, you will need to set the system-wide property druid.indexing.doubleStorage=float.

Disabling bitmap indexes

0.13.0 adds support for disabling bitmap indexes on a per-column basis, which can save space in cases where bitmap indexes add no value. This is done by setting the 'createBitmapIndex' field in the dimension schema. Segments written with this option will not be backwards compatible with older versions of Druid (#5402).
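For example, within a dimensionsSpec (segments written this way require 0.13.0 or later to read):

```json
"dimensionsSpec": {
  "dimensions": [
    "page",
    { "type": "string", "name": "comment", "createBitmapIndex": false }
  ]
}
```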

utf8mb4 is now the recommended metadata storage charset

For upgrade instructions, use the ALTER DATABASE and ALTER TABLE instructions as described here: https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html.
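As a sketch, assuming a metadata database named druid (repeat the ALTER TABLE statement for each druid_* metadata table):

```sql
ALTER DATABASE druid DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
ALTER TABLE druid_segments CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```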

For motivation and reference, see #5377 and #5411.

Removed configuration properties

  • druid.indexer.runner.tlsStartPort has been removed (#6194).
  • druid.indexer.runner.separateIngestionEndpoint has been removed (#6263).

Behavior changes

ParseSpec is now a required field in ingestion specs

There is no longer a default ParseSpec (previously the DelimitedParseSpec). Ingestion specs now require parseSpec to be specified. If you previously did not provide a parseSpec, you should use one with "format": "tsv" to maintain the existing behavior (#6310).
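For example, to keep the previous default behavior explicitly (column names here are illustrative; TSV parseSpecs require the columns to be listed):

```json
"parseSpec": {
  "format": "tsv",
  "timestampSpec": { "column": "timestamp", "format": "auto" },
  "columns": ["timestamp", "page", "added"],
  "dimensionsSpec": { "dimensions": ["page"] }
}
```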

Change to default maximum rows to return in one JDBC frame

The default value for druid.sql.avatica.maxRowsPerFrame was reduced from 100k to 5k to minimize out of memory errors (#5409).

Router behavior change when routing to brokers dedicated to different time ranges

As a result of #5595, routers may now select an undesired broker in configurations where there are different tiers of brokers that are intended to be dedicated to queries on different time ranges. See #1362 for discussion.

Ruby TimestampSpec no longer ignores milliseconds

Timestamps parsed using a TimestampSpec with format 'ruby' no longer have their millisecond component truncated. If you were using this parser and want a query granularity of SECOND, ensure that it is configured appropriately in your indexing specs (#6217).

Small increase in size of ZooKeeper task announcements

The datasource name was added to TaskAnnouncement, which will result in a small per-task increase in the amount of data stored in ZooKeeper (#5511).

Addition of 'name' field to filtered aggregators

Aggregators of type 'filtered' now support a 'name' field. Previously, the filtered aggregator inherited the name of the aggregator it wrapped. If the 'name' field is provided for both the filtered aggregator and the wrapped aggregator, the name of the filtered aggregator is preferred; the name of the wrapped aggregator is used only if the filtered aggregator's name is missing or empty (#6219).
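For example, the result of this aggregator is reported under the outer name, count_android, rather than the name of the wrapped aggregator:

```json
{
  "type": "filtered",
  "name": "count_android",
  "filter": { "type": "selector", "dimension": "os", "value": "android" },
  "aggregator": { "type": "count", "name": "count" }
}
```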

Interface changes for extension developers

  • Packaging has been renamed from io.druid to org.apache.druid. All third-party extensions will need to rename their META-INF/io.druid.initialization.DruidModule to org.apache.druid.initialization.DruidModule and update their extension's packaging appropriately (#6266).

  • The DataSegmentPuller interface has been removed (#5461).

  • A number of functions under java-util have been removed (#5461).

  • The constructor of the Metadata class has changed (#5613).

  • The 'spark2' Maven profile has been removed (#5581).

API deprecations

Overlord

  • The /druid/indexer/v1/supervisor/{id}/shutdown endpoint has been deprecated in favor of /druid/indexer/v1/supervisor/{id}/terminate (#6272 and #6234).
  • The /druid/indexer/v1/task/{taskId}/segments endpoint has been deprecated (#6368).
  • The status field returned by /druid/indexer/v1/task/{taskId}/status has been deprecated in favor of statusCode (#6334).
  • The reportParseExceptions and ignoreInvalidRows parameters for ingestion tuning configurations have been deprecated in favor of logParseExceptions and maxParseExceptions (#5418).

Broker

  • The /druid/v2/datasources/{dataSourceName}/dimensions endpoint has been deprecated. A segment metadata query or the INFORMATION_SCHEMA SQL table should be used instead (#6361).
  • The /druid/v2/datasources/{dataSourceName}/metrics endpoint has been deprecated. A segment metadata query or the INFORMATION_SCHEMA SQL table should be used instead (#6361).

Credits

Thanks to everyone who contributed to this release!

@a2l007
@adursun
@ak08
@akashdw
@aleksi75
@AlexanderSaydakov
@alperkokmen
@amalakar
@andresgomezfrr
@apollotonkosmo
@asdf2014
@awelsh93
@b-slim
@bolkedebruin
@Caroline1000
@chengchengpei
@clintropolis
@dansuzuki
@dclim
@DiegoEliasCosta
@dragonls
@drcrallen
@dyanarose
@dyf6372
@Dylan1312
@erikdubbelboer
@es1220
@evasomething
@fjy
@Fokko
@gaodayue
@gianm
@hate13
@himanshug
@hoesler
@jaihind213
@jcollado
@jihoonson
@jim-slattery-rs
@jkukul
@jon-wei
@josephglanville
@jsun98
@kaijianding
@KenjiTakahashi
@kevinconaway
@korvit0
@leventov
@lssenthilkumar
@mhshimul
@niketh
@NirajaM
@nishantmonu51
@njhartwell
@palanieppan-m
@pdeva
@pjain1
@QiuMM
@redlion99
@rpless
@samarthjain
@Scorpil
@scrawfor
@shiroari
@shivtools
@siddharths
@SpyderRivera
@spyk
@stuartmclean
@surekhasaharan
@susielu
@varaga
@vineshcpaul
@vvc11
@wysstartgo
@xvrl
@yunwan
@yuppie-flu
@yurmix
@zhangxinyu1
@zhztheplayer

@dclim dclim added this to the 0.13.0 milestone Oct 10, 2018


pdeva commented Oct 15, 2018

Support for 64-bit floating point columns was release in Druid 0.11.0, so if this is enabled, versions older than 0.11.0 will not be able to read the data segments.

this was a little confusing when I first read it.

please consider rephrasing to:

64-bit doubles aggregators are now used by default (see #5478). Support for 64-bit floating point columns was only released in Druid 0.11.0.

If you are upgrading from a version older than 0.11 directly to 0.13,  your nodes will not be able to read the data segments. Thus, for users upgrading from version older than 0.11, either:

1.  disable and keep the old format. For this, you will need to set the system-wide property druid.indexing.doubleStorage=float.

2. First upgrade to 0.11, and then to 0.13



dclim commented Oct 15, 2018

@pdeva thank you for the comment! maybe @nishantmonu51 can confirm, but I don't think that's exactly accurate. 0.13 will not have trouble reading segments generated before 0.11; the issue is that since the 64-bits doubles aggregator was added in 0.11, if you enabled this feature and then rolled back to a version of Druid prior to 0.11.0, that's when you would run into compatibility issues. Is the comment more clear now with that context?


pdeva commented Oct 15, 2018

I am actually more confused now.

is this correct, in a gist:

  • if you are upgrading from Druid 0.11 or 0.12, you don't need to worry.
  • if you are upgrading from < 0.11, things may go bad if:
    xxxxxx

dclim commented Oct 16, 2018

@pdeva

  • Prior to 0.11, Druid only supported using 32-bit floats for the storage layer and so would not be able to understand segments that used 64-bit doubles.
  • After 0.11, Druid now also supports 64-bit doubles for storage, but up until this release, it was still using 32-bit floats by default unless 64-bit doubles were explicitly enabled.
  • In this release, 64-bit doubles will now be the default unless druid.indexing.doubleStorage=float is set.

Hence, for this specific feature, there is no issue upgrading from previous versions to 0.13, but if you don't explicitly configure 0.13 to use 32-bit floats to store double values, you will not be able to rollback to a version before 0.11.

@drcrallen drcrallen added the Apache label Oct 22, 2018


glasser commented Oct 29, 2018

@dclim I see on the dev@ list that you've proposed a release candidate! Exciting!

Has the RC been published to any maven repository so we could test it directly, or are there just the binary tarballs?


dclim commented Oct 29, 2018

@glasser I'm working on getting the RC published to Apache Nexus and will let you know when this is done. There will definitely be an RC2 coming as well (with some bugfixes + some changes to conform to Apache's licensing policy)


dclim commented Nov 16, 2018

@glasser Just to close the loop here - RC2 has now been published along with the proposed Maven artifacts. The artifacts are in a staging directory on Apache Nexus and will be promoted for release once that vote is approved. You can view them here: https://repository.apache.org/content/repositories/orgapachedruid-1000/


dclim commented Nov 17, 2018

@glasser FYI - we discovered a critical bug hours after RC2 was published, so there is now an RC3. Maven artifacts for RC3 are here: https://repository.apache.org/content/repositories/orgapachedruid-1001/


gianm commented Nov 20, 2018

@dclim I noticed in testing that with the switch to the AWS S3 SDK instead of jets3t, a non-EC2 cluster that is using S3 as deep storage fails to start up properly. The issue is that you need to have an AWS region set (unlike jets3t, it won't default to us-east-1).

One way to work around this is:

  • Add -Daws.region=us-east-1 to all jvm.configs.
  • And, add -Daws.region=us-east-1 to druid.indexer.runner.javaOpts in middleManager/runtime.properties.

Probably setting the AWS_REGION=us-east-1 environment variable would help too, but I haven't tested that. I believe this is only an issue if you are not running on EC2. On EC2 it seems to pick up the current region as the default region.


dclim commented Nov 20, 2018

@gianm thank you, I can add this to the release notes. I suppose this should also be added to the S3 extension documentation, but I'm not clear on whether that would necessitate another release candidate (since the documentation is packaged in the release artifacts).


pdeva commented Dec 6, 2018

the link to mysql doc does not describe how to get and where to place the mysql jdbc driver.
it still refers to the old method of using pull deps to pull the entire extension


pdeva commented Dec 6, 2018

doesn’t the aws region property also need to be applied to historical configs, since they download from s3


dclim commented Dec 7, 2018

@pdeva when 0.13.0 is released, the docs will be updated and /latest should point to the new docs that describe where to put the mysql driver. You can see the docs here: https://github.com/apache/incubator-druid/blob/master/docs/content/development/extensions-core/mysql.md

Yes, the AWS region property also needs to be applied to the historical configs. The release notes mention adding it to the jvm.config for all Druid services.

@dclim dclim changed the title [WIP] Druid 0.13.0-incubating release notes Druid 0.13.0-incubating release notes Dec 12, 2018

@dclim dclim closed this Dec 12, 2018

@dclim dclim added this to Done in Apache Incubation via automation Dec 12, 2018


pantaoran commented Dec 14, 2018

@pdeva when 0.13.0 is released, the docs will be updated and /latest should point to the new docs that describe where to put the mysql driver.

This sounded reasonable, but it has not happened. I have a tool watching for Druid releases and alerting me to them, so now that it has been released, I wanted to read the release notes. However, many links still point to now non-existing /docs/0.13.0-incubating/ pages, so I can't read the new documentation. I tried replacing the tag with latest, but then I still see 0.12.3 documentation.

Could you please be careful to really point the latest documentation correctly when doing releases? I've had issues with this several times in the past already. It can't possibly be in your best interest to publish exciting new releases only to have many dead links in the release notes. Thank you :-)


aleksi75 commented Dec 14, 2018

I don't think that 0.13.0 is officially released. If you look here: http://druid.io/downloads.html, the latest stable release is 0.12.3.


pantaoran commented Dec 14, 2018

Interesting, I had not considered that possibility. All I'm looking at are the releases on GitHub.


aleksi75 commented Dec 14, 2018

Me too =) ...but I have to use the compiled package and so I found (sadly) that there is none


dclim commented Dec 14, 2018

@pantaoran apologies for the confusion, @aleksi75 is correct that 0.13.0 isn't officially released quite yet, we're just working on some final steps to prepare. I probably was a bit quick on creating the release tag (I was trying to get the 0.13.0 links and documentation tested in a staging environment and created the tag so the corresponding links would work). The Druid website will be updated and announcements will be sent out when it's official (which should be very soon).


love1693294577 commented Dec 18, 2018

@pantaoran apologies for the confusion, @aleksi75 is correct that 0.13.0 isn't officially released quite yet, we're just working on some final steps to prepare. I probably was a bit quick on creating the release tag (I was trying to get the 0.13.0 links and documentation tested in a staging environment and created the tag so the corresponding links would work). The Druid website will be updated and announcements will be sent out when it's official (which should be very soon).

Hello, for this precise calculation, what are your good methods and plans?


dclim commented Dec 18, 2018

@pantaoran @aleksi75 @love1693294577 just wanted to let you know that 0.13.0-incubating is now officially released! You can get it from: http://druid.io/downloads.html


pdeva commented Dec 18, 2018

what does 'incubating' stand for in the release name in this case?
is this release not ready for production?


dclim commented Dec 18, 2018

@pdeva 'incubating' denotes that Druid is currently under the Apache Incubator program (as opposed to having graduated to a full-fledged top level Apache project). All projects enter the Apache foundation through the incubator. It has nothing to do with quality or suitability for production. All GA druid releases are suitable for production usage.


aleksi75 commented Dec 19, 2018

@pantaoran @aleksi75 @love1693294577 just wanted to let you know that 0.13.0-incubating is now officially released! You can get it from: http://druid.io/downloads.html
Yippie a yeah.


dragonls commented Dec 21, 2018

It seems that the released file is a little strange.

I downloaded the file from:
http://mirrors.hust.edu.cn/apache/incubator/druid/0.13.0-incubating/apache-druid-0.13.0-incubating-bin.tar.gz
SHA1: e923fa8d2f4c79b28218a401f11aa2d379f46c59

But the file seems abnormal:
[screenshots of the extracted directory listing]

Some of the extracted files have incomplete file names. Did something go wrong during the distribution?


pantaoran commented Dec 21, 2018

@dragonls I downloaded the file from 2 different mirrors, the SHA1 sum is the same as yours.
When I extract it, everything seems fine. Did something go wrong during your decompression?


dragonls commented Dec 21, 2018

Emmm, interesting...
Maybe there is something wrong during my decompression. Anyway, @pantaoran thanks a lot.

I tried tar -zxvf xxx.tar.gz and all the results are OK. Maybe it is a bug in 7-Zip.


indrekj commented Mar 22, 2019

@dclim We're using s3 as deep storage and we had an incident when upgrading druid.

Our s3 access policy is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "S3:*",
            "Resource": [
                "arn:aws:s3:::retracted-druid-dev/acceptance/*",
                "arn:aws:s3:::retracted-druid-dev/acceptance-indexer-logs/*"
            ],
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::retracted-druid-dev"
            ],
            "Condition": {}
        }
    ]
}

Peons however started failing with:

Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: DA6A50FF8662CAC3; S3 Extended Request ID: UvfZA/YEDbKKlgCUS6u4Tk0e9DTyZkLJqY2iPgYyYOCy08gnLPTkbN1HPy7kyaDd0tkwABIve7A=)
...
com.amazonaws.services.s3.AmazonS3Client.getBucketAcl(AmazonS3Client.java:1150) ~[aws-java-sdk-s3-1.11.199.jar:?]
        at org.apache.druid.storage.s3.ServerSideEncryptingAmazonS3.getBucketAcl(ServerSideEncryptingAmazonS3.java:70) ~[?:?]
        at org.apache.druid.storage.s3.S3Utils.grantFullControlToBucketOwner(S3Utils.java:199) ~[?:?]

It didn't even retry. The peons were shut down and the data was lost.

We fixed it by adding s3:GetBucketAcl to the policy. This probably should also be mentioned in the release notes.
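Based on that fix, the additional statement needed in the bucket policy would look roughly like this (bucket ARN taken from the policy shown above):

```json
{
  "Effect": "Allow",
  "Action": ["s3:GetBucketAcl"],
  "Resource": ["arn:aws:s3:::retracted-druid-dev"]
}
```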


jihoonson commented Mar 22, 2019

Hi @indrekj, it's documented in the master branch, but our doc is not updated yet. Sorry for the inconvenience.
