smartlocus/druid
The official Druid Helm chart, forked and modified so that it deploys easily and works out of the box.

# Added S3 extension storage

Create your own S3 storage, or use an S3 service such as Amazon S3. In my case I use my own S3 storage backed by Rook Ceph, which I manage myself. Note that you need to change the S3 endpoint URL and the accessKey and secretKey credentials:

  druid_storage_type: s3
  druid_storage_bucket: datalake-bucket                # change this to your bucket name
  druid_storage_baseKey: druid/segments
  druid_s3_accessKey: BCS8W44B9Z06ZH                   # change this to your access key
  druid_s3_secretKey: nuWyyc4hC3h1Cfb54c8m3FJow6Lr     # change this to your secret key
  druid_s3_protocol: http
  druid_s3_endpoint_url: http://10.#.#.#97:80          # change this to your S3 endpoint URL
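
The chart hands these `druid_*` entries to the pods as environment variables, and the Druid Docker entrypoint turns each `druid_a_b`-style variable into the matching `druid.a.b` runtime property. If you would rather keep the credentials out of values.yaml, one option is a plain Kubernetes Secret; this is a sketch of a generic pattern, not something the chart wires up for you, and the secret name is made up:

```yaml
# Sketch only: store the S3 credentials outside values.yaml.
# druid-s3-credentials is an example name, not a chart convention.
apiVersion: v1
kind: Secret
metadata:
  name: druid-s3-credentials
type: Opaque
stringData:
  druid_s3_accessKey: <your-access-key>   # becomes druid.s3.accessKey
  druid_s3_secretKey: <your-secret-key>   # becomes druid.s3.secretKey
```

You would still have to add an `envFrom: [{secretRef: {name: druid-s3-credentials}}]` entry to each Druid container spec yourself, since the chart templates may not expose a hook for it.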


# Added a Postgres connection for storing the metadata

Deploy your own Postgres container and create a database named druid inside it:

  druid_extensions_loadList: '["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage"]'
  druid_metadata_storage_type: postgresql
  druid_metadata_storage_connector_connectURI: jdbc:postgresql://sl-postgres.building00030:5432/druid   # building00030 is the namespace; change it to the namespace where your Postgres container is running
  druid_metadata_storage_connector_user: myuser           # change this to the username you set in your Postgres
  druid_metadata_storage_connector_password: mypassword   # change this to the password you set in your Postgres
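
If you do not have Postgres running yet, here is a minimal sketch of a deployment that matches the connect URI above. The service name (sl-postgres) and namespace (building00030) mirror my setup; the image tag, user, and password are placeholders, and there is no persistent volume, so treat it as a starting point only:

```yaml
# Sketch only: a minimal Postgres for Druid metadata, reachable at
# sl-postgres.building00030:5432 as in the connect URI above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-postgres
  namespace: building00030
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sl-postgres
  template:
    metadata:
      labels:
        app: sl-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15        # placeholder version
          env:
            - name: POSTGRES_DB     # the official image creates this database on first start
              value: druid
            - name: POSTGRES_USER
              value: myuser
            - name: POSTGRES_PASSWORD
              value: mypassword
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: sl-postgres
  namespace: building00030
spec:
  selector:
    app: sl-postgres
  ports:
    - port: 5432
```

The POSTGRES_DB variable makes the official postgres image create the druid database on first start, which covers the "create a database named druid" step.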


# Kafka lookups (optional)

If you have a running Kafka cluster you want to consume from, modify the following part of the values.yaml file, where bootstrap.servers is your Kafka bootstrap endpoint:

  druid_kafka_extraction_namespace: |
    [
      {
        "namespace": "customer-lookup",
        "type": "kafka",
        "kafkaTopic": "enviro_info_forecast_rt_feed",
        "kafkaProperties": {
          "bootstrap.servers": "my-cluster-kafka-bootstrap.kafka:9092"
        }
      }
    ]

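One thing to double-check in your druid_extensions_loadList: per the upstream Druid docs, Kafka-backed lookups are provided by the druid-kafka-extraction-namespace extension (alongside druid-lookups-cached-global), and the S3 deep storage configured earlier needs druid-s3-extensions. A load list covering everything in this README would look roughly like this; whether the chart already adds some of these elsewhere is an assumption you should verify:

```yaml
druid_extensions_loadList: '["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "druid-kafka-extraction-namespace", "druid-s3-extensions", "postgresql-metadata-storage"]'
```
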
Note: I have disabled and removed the persistence this chart had before. If you need persistence, create your PersistentVolumeClaims and add them to each container. When you deploy this chart, you will have druid-broker-6f685d4478-rqtq8, druid-coordinator-5746cbcb8d-kjflz, druid-historical-0, and druid-middle-manager-0 pods running. Before deploying Druid, be sure you have set up your Postgres and S3 storage correctly.
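
If you do want persistence back, the usual pattern is one PersistentVolumeClaim per stateful pod (druid-historical-0 and druid-middle-manager-0 at minimum), mounted at that pod's data path. A minimal sketch, where the claim name and size are placeholders:

```yaml
# Sketch only: a claim you could mount for the historical's segment cache.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: druid-historical-data   # example name, not a chart convention
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi             # placeholder; size it to your segment cache
```

You would then wire it into the historical StatefulSet via volumes and volumeMounts, with the mountPath matching wherever the chart points druid.segmentCache.locations.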



Apache Druid

Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.

Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The design documentation explains the key concepts.

Getting started

You can get started with Druid with our local or Docker quickstart.

Druid provides a rich set of APIs (via HTTP and JDBC) for loading, managing, and querying your data. You can also interact with Druid via the built-in web console (shown below).

Load data

[screenshot: Kafka data loader]

Load streaming and batch data using a point-and-click wizard to guide you through ingestion setup. Monitor one-off tasks and ingestion supervisors.

Manage the cluster

[screenshot: cluster management view]

Manage your cluster with ease. Get a view of your datasources, segments, ingestion tasks, and services from one convenient location. All powered by SQL systems tables, allowing you to see the underlying query for each view.

Issue queries

[screenshot: query view]

Use the built-in query workbench to prototype DruidSQL and native queries or connect one of the many tools that help you make the most out of Druid.

Documentation

See the latest documentation for the current official release. If you need information on a previous release, you can browse previous releases documentation.

Make documentation and tutorial updates in /docs using Markdown and contribute them using a pull request.

Community

Visit the official project community page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.

  • Druid users can find help in the druid-user mailing list on Google Groups, and have more technical conversations in #troubleshooting on Slack.
  • Druid development discussions take place in the druid-dev mailing list (dev@druid.apache.org). Subscribe by emailing dev-subscribe@druid.apache.org. For live conversations, join the #dev channel on Slack.

Check out the official community page for details of how to join the community Slack channels.

Find articles written by community members and a calendar of upcoming events on the project site - contribute your own events and articles by submitting a PR in the apache/druid-website-src repository.

Building from source

Please note that JDK 8 or JDK 11 is required to build Druid.

See the latest build guide for instructions on building Apache Druid from source.

Contributing

Please follow the community guidelines for contributing.

For instructions on setting up IntelliJ, see dev/intellij-setup.md.

License

Apache License, Version 2.0