Merge branch 'v0.10.0'
scottreisdorf committed Dec 11, 2018
2 parents e1693f7 + be99aa5 commit 5a3ef51
Showing 6 changed files with 144 additions and 15 deletions.
91 changes: 90 additions & 1 deletion how-to-guides/JmsProviders.rst
@@ -45,6 +45,87 @@ Refer to the ActiveMQ documentation, http://activemq.apache.org/redelivery-polic
..
Dead Letter Queue Strategy
~~~~~~~~~~~~~~~~~~~~~~~~~~
When a queue fails to process a message, it retries based on the `redelivery strategy` (configured above) and then sends the message to the Dead Letter Queue (DLQ).
By default, all such messages go to a single `ActiveMQ.DLQ` queue.
It is recommended that you change this for the Kylo queues so that each goes to its own DLQ.

To do this you need to modify the `activemq.xml` file (/opt/activemq/current/conf/activemq.xml) and add the policy entries to describe the DLQ strategy.
Below is an example:

.. code-block:: xml
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue="thinkbig.feed-manager">
<deadLetterStrategy>
<!--
Use the prefix 'DLQ.' for the destination name, and make
the DLQ a queue rather than a topic
-->
<individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
</deadLetterStrategy>
</policyEntry>
<policyEntry queue="thinkbig.provenance-event-stats">
<deadLetterStrategy>
<!--
Use the prefix 'DLQ.' for the destination name, and make
the DLQ a queue rather than a topic
-->
<individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
</deadLetterStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
..
With this configuration, failed messages are sent to separate DLQ queues (see screenshot below).

|image0|

Refer to the ActiveMQ docs for more info: http://activemq.apache.org/message-redelivery-and-dlq-handling.html
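
Rather than adding a ``policyEntry`` for every Kylo queue, ActiveMQ's wildcard destination syntax can apply the same per-queue DLQ strategy to all queues at once. The entry below is a sketch (the ``>`` wildcard matches every queue; verify the behavior against your broker version before relying on it):

```xml
<!-- One entry covering all queues; each failing queue gets its own DLQ.<queueName> -->
<policyEntry queue=">">
    <deadLetterStrategy>
        <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
    </deadLetterStrategy>
</policyEntry>
```
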

ActiveMQ Cluster
~~~~~~~~~~~~~~~~
If you have ActiveMQ set up as a cluster, you need to change the broker URL to use the ActiveMQ failover syntax.

1. Update kylo-services/conf/application.properties and change the `jms.activemq.broker.url` property to use the `failover` protocol with the clustered ActiveMQ URLs:

.. code-block:: shell
jms.activemq.broker.url=failover:(tcp://localhost:61616, tcp://other amq url)
..
2. Do the same for the NiFi /opt/nifi/ext-config/config.properties file:

.. code-block:: shell
jms.activemq.broker.url=failover:(tcp://localhost:61616, tcp://other amq url)
..
3. Make sure the NiFi JMS controller services use the failover protocol with a URL similar to the one above.


Refer to the ActiveMQ docs for more information: http://activemq.apache.org/failover-transport-reference.html
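
The failover transport also accepts tuning options appended to the URI. The host names and option values below are illustrative assumptions, not recommendations; see the failover transport reference above for the full option list:

```shell
# randomize=false        : try the brokers in the listed order instead of randomly
# maxReconnectAttempts=10: give up after 10 failed reconnect attempts
jms.activemq.broker.url=failover:(tcp://amq-host1:61616,tcp://amq-host2:61616)?randomize=false&maxReconnectAttempts=10
```
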

Amazon SQS
----------

@@ -107,6 +188,7 @@ Select ``jms-activemq`` profile and provide ActiveMQ specific configuration prop
..
Refer to the ActiveMQ Cluster above for additional settings if you are running with an ActiveMQ cluster.



@@ -156,4 +238,11 @@ ConsumeJMS processors with GetSQS processors in following feeds:

- index_schema_service -> Receive Schema Index Request (ConsumeJMS)

- index_text_service -> Receive Index Request (ConsumeJMS)



.. |image0| image:: ../media/jms/jms-dlq.png
:width: 1902px
:height: 1270px
:scale: 15%
30 changes: 27 additions & 3 deletions installation/ManualDeploymentGuide.rst
@@ -387,17 +387,41 @@ This section is required for Option 2 above. Skip this section if you followed O
..
2. Copy the /opt/kylo/setup/nifi/config.properties file to the /opt/nifi/ext-config folder.

3. Set up the shared Kylo encryption key:

1. Copy the encryption key file to the ``/opt/nifi/ext-config`` folder:

.. code-block:: shell
cp /opt/kylo/encrypt.key /opt/nifi/ext-config
..
2. Change the ownership and permissions of the key file to ensure only nifi can read it

.. code-block:: shell
chown nifi /opt/nifi/ext-config/encrypt.key
chmod 400 /opt/nifi/ext-config/encrypt.key
..
3. Edit the ``/opt/nifi/current/bin/nifi-env.sh`` file and add the ENCRYPT_KEY variable with the key value

.. code-block:: shell
export ENCRYPT_KEY="$(< /opt/nifi/ext-config/encrypt.key)"
..
4. Change the ownership of the above folder to the same owner that nifi runs under. For example, if nifi runs as the "nifi" user:

.. code-block:: shell
$ chown -R nifi:users /opt/nifi
..
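
The key setup above can be sanity-checked from a shell. The sketch below uses a temporary stand-in file so it can run anywhere; only the paths differ from the real setup (``/opt/nifi/ext-config/encrypt.key``):

```shell
# Stand-in for /opt/nifi/ext-config/encrypt.key (illustration only)
tmpdir=$(mktemp -d)
printf 'example-key-material' > "$tmpdir/encrypt.key"
chmod 400 "$tmpdir/encrypt.key"

# The $(< file) expansion used in nifi-env.sh reads the file contents into the variable
export ENCRYPT_KEY="$(< "$tmpdir/encrypt.key")"

# Verify: permissions are 400 and the key was loaded
perms=$(stat -c '%a' "$tmpdir/encrypt.key")
echo "$perms"        # 400
echo "$ENCRYPT_KEY"  # example-key-material

rm -rf "$tmpdir"
```
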
11. Create an activemq folder to provide JARs required for the JMS processors.

Configure the activemq folder
-----------------------------

8 changes: 4 additions & 4 deletions installation/Mapr6.01KyloInstallation.rst
@@ -6,7 +6,7 @@ MapR 6.0.1 Kylo Installation Guide
About
=====

This guide provides an end-to-end example of installing Kylo on a single MapR 6.0.1 node in AWS. Kylo is generally installed
on an edge node in a Hadoop cluster.

Two things are required before installing Kylo
@@ -83,7 +83,7 @@ Download the Kylo RPM
.. code-block:: console
# Run as root
wget http://bit.ly/2KDX4cy -O /opt/kylo-#.#.#.#.rpm
..
@@ -115,7 +115,7 @@ Install the Kylo RPM
.. code-block:: console
# Run as root
rpm -ivh kylo-#.#.#.#.rpm
..
@@ -575,4 +575,4 @@ Troubleshooting

.. |Hive Directories Optimization| raw:: html

<a href="https://mapr.com/docs/60/Hive/Config-HiveDirectories.html " target="_blank">optimization</a>
Binary file added media/jms/jms-dlq.png
19 changes: 12 additions & 7 deletions release-notes/ReleaseNotes10.0.rst
@@ -3,11 +3,11 @@ Release 0.10.0 (November 29, 2018)

Highlights
----------
1. :ref:`New user interface <new_ui_highlight>`. Improved look-and-feel, performance, and modernized color scheme. Simplified setup guide for creating new feeds.
2. :ref:`Template Manager <repository_highlight>`. New template management that lets admins quickly update templates when new versions become available, or publish templates for the enterprise.
3. :ref:`Wrangler improvements <wrangler_highlight>`. Many new features have been added to the wrangler to make data scientists more productive. Features include: quick data clean; improved schema manipulation; new data transformations, including imputing new values; column statistics view.
4. :ref:`Data Catalog <catalog_highlight>`. New virtual catalog for accessing remote datasets for wrangling, previewing, and feed setup. Kylo 0.10 includes the following connectors: Amazon S3, Azure, HDFS, Hive, JDBC, Local Files, NAS Filesystem
5. :ref:`Search custom properties <search_properties_es_highlight>`. Any custom properties defined for feeds and categories are indexed and available via the global metadata search feature.

Download Links
--------------
@@ -183,11 +183,16 @@ Upgrade Instructions from v0.9.1
..
Configure NiFi with Kylo's shared :doc:`../security/EncryptingConfigurationProperties`

1. Copy the Kylo encryption key file to the NiFi extension config directory:

.. code-block:: shell
cp /opt/kylo/encrypt.key /opt/nifi/ext-config
..
2. Change the ownership and permissions of the key file to ensure only nifi can read it

.. code-block:: shell
11 changes: 11 additions & 0 deletions security/EncryptingConfigurationProperties.rst
@@ -1,3 +1,14 @@
==============
Encryption Key
==============

Kylo uses an encryption key file (`/opt/kylo/encrypt.key`) both to encrypt credentials stored in its metadata store
and to allow properties in Kylo's configuration files to be encrypted. The same key is shared with NiFi (`/opt/nifi/ext-config/encrypt.key`) so that NiFi can supply it to the Spark jobs
it launches, allowing those jobs to decrypt the credentials needed to access their data sources.

The Kylo key file is usually generated automatically during installation. This same key is automatically configured for NiFi during installation when using the :doc:`../installation/SetupWizardDeploymentGuide`.
There are also manual configuration steps to provide this key to NiFi as described in :doc:`../installation/ManualDeploymentGuide`.
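
If the key file ever needs to be created by hand (for example on a node the installer did not touch), a random single-line secret can be generated from a shell. This is a sketch only: Kylo normally generates the key during installation, and the single-line text format here is an assumption based on how the key is read into an environment variable, not a documented requirement:

```shell
# Illustrative only: a temporary file stands in for /opt/kylo/encrypt.key
keyfile=$(mktemp)

# 32 random bytes, base64-encoded into a single line of text
head -c 32 /dev/urandom | base64 | tr -d '\n' > "$keyfile"
chmod 400 "$keyfile"

wc -c < "$keyfile"  # 44 (base64 of 32 bytes, no trailing newline)
```
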

===================================
Encrypting Configuration Properties
===================================
