diff --git a/symmetric-assemble/src/asciidoc/appendix/databases.ad b/symmetric-assemble/src/asciidoc/appendix/databases.ad
index 123375e2cb..76836de9bd 100644
--- a/symmetric-assemble/src/asciidoc/appendix/databases.ad
+++ b/symmetric-assemble/src/asciidoc/appendix/databases.ad
@@ -554,6 +554,7 @@ include::mariadb.ad[]
 include::mongodb.ad[]
 include::mssqlserver.ad[]
 include::mysql.ad[]
+include::opensearch.ad[]
 include::oracle.ad[]
 include::postgresql.ad[]
 include::redshift.ad[]
diff --git a/symmetric-assemble/src/asciidoc/appendix/elasticsearch.ad b/symmetric-assemble/src/asciidoc/appendix/elasticsearch.ad
index 135d04d8a8..56d92e785f 100644
--- a/symmetric-assemble/src/asciidoc/appendix/elasticsearch.ad
+++ b/symmetric-assemble/src/asciidoc/appendix/elasticsearch.ad
@@ -1,11 +1,12 @@
 ifndef::pro[]
+
 === Elasticsearch
 
 Use `symadmin module install elasticsearch` to install driver files, or copy your own files into the `lib` sub-directory.
 
 Send changes from your relational database to Elasticsearch in a variety of formats. An Elasticsearch node can be setup as a <> to receive changes from another node that is capturing changes.
 
-Setup the Elasticsearch node by using the <> wizard and selecting Elasticsearch as the type. The URL will be the connection point to Elasticsearch. User and password are not needed (or used).
+Setup the Elasticsearch node by using the <> wizard and selecting Elasticsearch as the type. The URL will be the connection point to Elasticsearch. If your Elasticsearch database has security enabled, please enter your username and password.
 
 image::appendix/elasticsearch-node-setup.png[]
 
@@ -13,16 +14,12 @@
 After hitting next you can setup advanced options for your Elasticsearch node.
 
 image::appendix/elasticsearch-advanced-settings.png[]
 
-You can also use an AWS Elasticsearch instance by providing the required information.
-
-image::appendix/elasticsearch-advanced-settings-aws.png[]
-
 ==== Loading Data Into Elasticsearch
 
 ===== Setup reload channels for bulk loading.
 
-Update any reload channels that will be used on the table triggers that will capture changes and send them to Elastic Search by setting the column data_loader_type to 'bulk'. It is also recommended to increase the batch size so that larger CSV files will be processed instead of the default size on reloads of 10,000 rows.
+Update any reload channels that will be used on the table triggers that will capture changes and send them to Elasticsearch by setting the column data_loader_type to 'bulk'. It is also recommended to increase the batch size so that larger CSV files will be processed instead of the default size on reloads of 10,000 rows.
 
 endif::pro[]
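
The updated paragraph on reload channels describes setting data_loader_type to 'bulk' and raising the batch size on the sym_channel row for the affected reload channel. A minimal SQL sketch of that change is shown below; the channel id 'reload' and the batch size of 100000 are illustrative assumptions, so substitute the channel ids and sizing used in your own installation.

[source,sql]
----
-- Sketch only: switch a reload channel to the bulk data loader and raise its
-- batch size above the 10,000-row default so larger CSV batches are processed.
-- 'reload' and 100000 are assumed values; adjust to your configuration.
update sym_channel
   set data_loader_type = 'bulk',
       max_batch_size   = 100000
 where channel_id = 'reload';
----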