switch links from druid.io to druid.apache.org (#7914)
* switch links from druid.io to druid.apache.org

* fix it
clintropolis authored and fjy committed Jun 18, 2019
1 parent e80297e commit 71997c16a244cfd6abb5a330fe5b1e808569a268
Showing 44 changed files with 79 additions and 86 deletions.
@@ -31,29 +31,26 @@ Apache Druid (incubating) is a high performance analytics data store for event-d

### More Information

-More information about Druid can be found on <http://www.druid.io>.
+More information about Druid can be found on <https://druid.apache.org>.

### Documentation

-You can find the [documentation for the latest Druid release](http://druid.io/docs/latest/) on
-the [project website](http://druid.io/docs/latest/).
+You can find the [documentation for the latest Druid release](https://druid.apache.org/docs/latest/) on
+the [project website](https://druid.apache.org/docs/latest/).

If you would like to contribute documentation, please do so under
`/docs/content` in this repository and submit a pull request.

### Getting Started

-You can get started with Druid with our [quickstart](http://druid.io/docs/latest/tutorials/quickstart.html).
+You can get started with Druid with our [quickstart](https://druid.apache.org/docs/latest/tutorials/quickstart.html).

### Reporting Issues

If you find any bugs, please file a [GitHub issue](https://github.com/apache/incubator-druid/issues).

### Community

-The Druid community is in the process of migrating to Apache by way of the Apache Incubator. Eventually, as we proceed
-along this path, our site will move from http://druid.io/ to https://druid.apache.org/.
-
Community support is available on the
[druid-user mailing list](https://groups.google.com/forum/#!forum/druid-user) (druid-user@googlegroups.com), which
is hosted at Google Groups.
@@ -72,5 +69,5 @@ For instructions on building Druid from source, see [docs/content/development/bu

### Contributing

-Please follow the guidelines listed [here](http://druid.io/community/).
+Please follow the guidelines listed [here](https://druid.apache.org/community/).

@@ -18,17 +18,13 @@ under the License.


Apache Druid (incubating) is a high performance analytics data store for event-driven data. More information about Druid
-can be found on http://www.druid.io.
-
-The Druid community is in the process of migrating to Apache by way of the Apache Incubator. Eventually, as we proceed
-along this path, our site will move from http://druid.io/ to https://druid.apache.org/.
-
+can be found on https://druid.apache.org.

Documentation
-------------
-You can find the documentation for {THIS_OR_THE_LATEST} Druid release on the project website http://druid.io/docs/{DRUIDVERSION}/.
+You can find the documentation for {THIS_OR_THE_LATEST} Druid release on the project website https://druid.apache.org/docs/{DRUIDVERSION}/.

-You can get started with Druid with our quickstart at http://druid.io/docs/{DRUIDVERSION}/tutorials/quickstart.html.
+You can get started with Druid with our quickstart at https://druid.apache.org/docs/{DRUIDVERSION}/tutorials/quickstart.html.


Build from Source
@@ -77,7 +73,7 @@ Contributing
------------
If you find any bugs, please file a GitHub issue at https://github.com/apache/incubator-druid/issues.

-If you wish to contribute, please follow the guidelines listed at http://druid.io/community/.
+If you wish to contribute, please follow the guidelines listed at https://druid.apache.org/community/.


Disclaimer: Apache Druid is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the
@@ -50,7 +50,7 @@ committer who visits an issue or a PR authored by a non-committer.
API elements (`@PublicApi` or `@ExtensionPoint`), runtime configuration options, emitted metric names, HTTP endpoint
behavior, or server behavior in some way that affects one of the following:

-- Ability to do a rolling update [as documented](http://druid.io/docs/latest/operations/rolling-updates.html)
+- Ability to do a rolling update [as documented](https://druid.apache.org/docs/latest/operations/rolling-updates.html)
without needing any modifications to server configurations or query workload.
- Ability to roll back a Druid cluster to a prior version.
- Ability to continue using old Druid extensions without recompiling them.
@@ -35,8 +35,8 @@ ingestion method.
| Parallel indexing | Always parallel | Parallel if firehose is splittable | Always sequential |
| Supported indexing modes | Replacing mode | Both appending and replacing modes | Both appending and replacing modes |
| External dependency | Hadoop (it internally submits Hadoop jobs) | No dependency | No dependency |
-| Supported [rollup modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](./index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range partitioning](./hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
| Supported input locations | All locations accessible via HDFS client or Druid dataSource | All implemented [firehoses](./firehose.html) | All implemented [firehoses](./firehose.html) |
| Supported file formats | All implemented Hadoop InputFormats | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added through a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added through a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) |
| Saving parse exceptions in ingestion report | Currently not supported | Currently not supported | Supported |
@@ -18,7 +18,7 @@
# under the License.
############################
# This script downloads the appropriate log4j2 jars and runs jconsole with them as plugins.
-# This script can be used as an example for how to connect to a druid.io instance to
+# This script can be used as an example for how to connect to a Druid instance to
# change the logging parameters at runtime
############################

@@ -23,7 +23,7 @@

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: http://druid.io/docs/latest/operations/including-extensions.html
+# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
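The `druid.extensions.loadList` behavior described in the comments above can be sketched as a small configuration example. Only the last line is taken from this file; the commented-out alternatives are illustrative, not part of the shipped config:

```properties
# 1. Explicit empty list: load no extensions at all.
#druid.extensions.loadList=[]

# 2. Property omitted entirely: load every extension found under the
#    root extension directory.

# 3. Explicit list: load only the named extensions (as this config does).
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]
```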
@@ -23,7 +23,7 @@

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: http://druid.io/docs/latest/operations/including-extensions.html
+# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
@@ -23,7 +23,7 @@

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: http://druid.io/docs/latest/operations/including-extensions.html
+# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
@@ -23,7 +23,7 @@

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: http://druid.io/docs/latest/operations/including-extensions.html
+# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
@@ -23,7 +23,7 @@

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: http://druid.io/docs/latest/operations/including-extensions.html
+# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
@@ -23,7 +23,7 @@

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: http://druid.io/docs/latest/operations/including-extensions.html
+# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
@@ -26,4 +26,4 @@ Overview

Documentation
=============
-See the druid.io website or under [Druid Github Repo](https://github.com/apache/incubator-druid/tree/master/docs/content/development/extensions-contrib/moving-average-query.md).
+See the druid.apache.org website or under [Druid Github Repo](https://github.com/apache/incubator-druid/tree/master/docs/content/development/extensions-contrib/moving-average-query.md).
@@ -19,4 +19,4 @@

This module contains a simple implementation of [SslContext](http://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLContext.html)
that will be injected to be used with HttpClient that Druid nodes use internally to communicate with each other.
-More details [here](http://druid.io/docs/latest/development/extensions-core/simple-client-sslcontext.html).
+More details [here](https://druid.apache.org/docs/latest/development/extensions-core/simple-client-sslcontext.html).
@@ -103,7 +103,7 @@ public void testFolding()
*
* When reaching very large cardinalities (>> 50,000,000), offsets are mismatched between the main HLL and the ones
* with 100 values, requiring a floating max as described in
-* http://druid.io/blog/2014/02/18/hyperloglog-optimizations-for-real-world-systems.html
+* https://druid.apache.org/blog/2014/02/18/hyperloglog-optimizations-for-real-world-systems.html
*/
@Ignore
@Test
@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = localhost
[ req_ext ]
@@ -62,7 +62,7 @@ default_md = default
preserve = no
policy = policy_match
serial = certs.seq
-email_in_dn=integration-test@druid.io
+email_in_dn=integration-test@druid.apache.org
[req]
default_bits = 4096
@@ -77,7 +77,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = itroot
[ v3_ca ]
@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = localhost
[ req_ext ]
@@ -32,7 +32,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = thisisprobablynottherighthostname
[ req_ext ]
@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=bad-intermediate@druid.io
+emailAddress=bad-intermediate@druid.apache.org
CN = badintermediate
[ req_ext ]
@@ -62,7 +62,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=basic-constraint-fail@druid.io
+emailAddress=basic-constraint-fail@druid.apache.org
CN = localhost
[ req_ext ]
@@ -38,7 +38,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = ${MY_IP}
[ req_ext ]
@@ -32,7 +32,7 @@ ST=DR
L=Druid City
O=Druid
OU=RevokedIntegrationTests
-emailAddress=revoked-it-cert@druid.io
+emailAddress=revoked-it-cert@druid.apache.org
CN = localhost
[ req_ext ]
@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = localhost
[ req_ext ]
@@ -31,7 +31,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=intermediate@druid.io
+emailAddress=intermediate@druid.apache.org
CN = intermediate
[ req_ext ]
@@ -62,7 +62,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=intermediate-client@druid.io
+emailAddress=intermediate-client@druid.apache.org
CN = localhost
[ req_ext ]
@@ -40,7 +40,7 @@ ST=DR
L=Druid City
O=Druid
OU=IntegrationTests
-emailAddress=integration-test@druid.io
+emailAddress=integration-test@druid.apache.org
CN = itroot

[ v3_ca ]
@@ -65,7 +65,7 @@
*/
@Command(
name = "broker",
-description = "Runs a broker node, see http://druid.io/docs/latest/Broker.html for a description"
+description = "Runs a broker node, see https://druid.apache.org/docs/latest/Broker.html for a description"
)
public class CliBroker extends ServerRunnable
{
@@ -98,7 +98,7 @@
*/
@Command(
name = "coordinator",
-description = "Runs the Coordinator, see http://druid.io/docs/latest/Coordinator.html for a description."
+description = "Runs the Coordinator, see https://druid.apache.org/docs/latest/Coordinator.html for a description."
)
public class CliCoordinator extends ServerRunnable
{
@@ -217,8 +217,8 @@ public void configure(Binder binder)
throw new UnsupportedOperationException(
"'druid.coordinator.merge.on' is not supported anymore. "
+ "Please consider using Coordinator's automatic compaction instead. "
-+ "See http://druid.io/docs/latest/operations/segment-optimization.html and "
-+ "http://druid.io/docs/latest/operations/api-reference.html#compaction-configuration for more "
++ "See https://druid.apache.org/docs/latest/operations/segment-optimization.html and "
++ "https://druid.apache.org/docs/latest/operations/api-reference.html#compaction-configuration for more "
+ "details about compaction."
);
}
@@ -41,7 +41,7 @@
*/
@Command(
name = "hadoop",
-description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/latest/Batch-ingestion.html for a description."
+description = "Runs the batch Hadoop Druid Indexer, see https://druid.apache.org/docs/latest/Batch-ingestion.html for a description."
)
public class CliHadoopIndexer implements Runnable
{
@@ -59,7 +59,7 @@
*/
@Command(
name = "historical",
-description = "Runs a Historical node, see http://druid.io/docs/latest/Historical.html for a description"
+description = "Runs a Historical node, see https://druid.apache.org/docs/latest/Historical.html for a description"
)
public class CliHistorical extends ServerRunnable
{
@@ -55,7 +55,7 @@
*/
@Command(
name = "hadoop-indexer",
-description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/latest/Batch-ingestion.html for a description."
+description = "Runs the batch Hadoop Druid Indexer, see https://druid.apache.org/docs/latest/Batch-ingestion.html for a description."
)
public class CliInternalHadoopIndexer extends GuiceRunnable
{
@@ -69,7 +69,7 @@
*/
@Command(
name = "middleManager",
-description = "Runs a Middle Manager, this is a \"task\" node used as part of the remote indexing service, see http://druid.io/docs/latest/design/middlemanager.html for a description"
+description = "Runs a Middle Manager, this is a \"task\" node used as part of the remote indexing service, see https://druid.apache.org/docs/latest/design/middlemanager.html for a description"
)
public class CliMiddleManager extends ServerRunnable
{
@@ -126,7 +126,7 @@
*/
@Command(
name = "overlord",
-description = "Runs an Overlord node, see http://druid.io/docs/latest/Indexing-Service.html for a description"
+description = "Runs an Overlord node, see https://druid.apache.org/docs/latest/Indexing-Service.html for a description"
)
public class CliOverlord extends ServerRunnable
{
@@ -117,7 +117,7 @@
@Command(
name = "peon",
description = "Runs a Peon, this is an individual forked \"task\" used as part of the indexing service. "
-+ "This should rarely, if ever, be used directly. See http://druid.io/docs/latest/design/peons.html for a description"
++ "This should rarely, if ever, be used directly. See https://druid.apache.org/docs/latest/design/peons.html for a description"
)
public class CliPeon extends GuiceRunnable
{
@@ -39,7 +39,7 @@
*/
@Command(
name = "realtime",
-description = "Runs a realtime node, see http://druid.io/docs/latest/Realtime.html for a description"
+description = "Runs a realtime node, see https://druid.apache.org/docs/latest/Realtime.html for a description"
)
public class CliRealtime extends ServerRunnable
{
@@ -50,7 +50,7 @@
*/
@Command(
name = "realtime",
-description = "Runs a standalone realtime node for examples, see http://druid.io/docs/latest/Realtime.html for a description"
+description = "Runs a standalone realtime node for examples, see https://druid.apache.org/docs/latest/Realtime.html for a description"
)
public class CliRealtimeExample extends ServerRunnable
{
