diff --git a/modules/ROOT/images/data-source.png b/modules/ROOT/images/data-source.png
index 9426ab93f..ea82889ae 100644
Binary files a/modules/ROOT/images/data-source.png and b/modules/ROOT/images/data-source.png differ
diff --git a/modules/ROOT/images/sources.png b/modules/ROOT/images/sources.png
index 90e7300dd..7f4a81321 100644
Binary files a/modules/ROOT/images/sources.png and b/modules/ROOT/images/sources.png differ
diff --git a/modules/ROOT/pages/import/file-provision.adoc b/modules/ROOT/pages/import/file-provision.adoc
index 9b15e28e6..cada4e7c0 100644
--- a/modules/ROOT/pages/import/file-provision.adoc
+++ b/modules/ROOT/pages/import/file-provision.adoc
@@ -19,7 +19,12 @@ When you use the *New data source* button, you are presented with the following
 * MySQL
 * SQL Server
 * Oracle
+* BigQuery
+* Databricks
 * Snowflake
+* AWS S3
+* Azure Blobs & Data Lake Storage
+* Google Cloud Storage

 Regardless of which one you select, you are required to provide roughly the same information to allow the Import service to load the tables for you from your remote source.

diff --git a/modules/ROOT/pages/import/indexes-and-constraints.adoc b/modules/ROOT/pages/import/indexes-and-constraints.adoc
index 480bfb262..fd32c17cf 100644
--- a/modules/ROOT/pages/import/indexes-and-constraints.adoc
+++ b/modules/ROOT/pages/import/indexes-and-constraints.adoc
@@ -5,6 +5,7 @@

 Import supports adding indexes to improve read performance of queries and creates constraints to ensure the accuracy of data.
 They are found in the details panel and the tab is visible when a single node is selected in the data model panel.
+[.shadow]
 image::constraints-tab.png[]

 Once a node is mapped to a file and a property is selected to serve as its ID, both a constraint and an index are created automatically.
diff --git a/modules/ROOT/pages/import/introduction.adoc b/modules/ROOT/pages/import/introduction.adoc
index bb81d8581..c7bd8c8fe 100644
--- a/modules/ROOT/pages/import/introduction.adoc
+++ b/modules/ROOT/pages/import/introduction.adoc
@@ -9,7 +9,12 @@ It allows you to import data without using any code from:
 * MySQL
 * SQL Server
 * Oracle
+* BigQuery
+* Databricks
 * Snowflake
+* AWS S3
+* Azure Blobs & Data Lake Storage
+* Google Cloud Storage
 * .CSV

 This service is also available as a standalone tool for importing data into self-managed Neo4j instances.
diff --git a/modules/ROOT/pages/import/mapping.adoc b/modules/ROOT/pages/import/mapping.adoc
index c3640d52e..f4087981e 100644
--- a/modules/ROOT/pages/import/mapping.adoc
+++ b/modules/ROOT/pages/import/mapping.adoc
@@ -95,9 +95,9 @@ By default, Import excludes any rows where the value of the node ID column is em

 The node exclude list is available from the more menu (`...`) in the data model panel, under _Settings_.

+[.shadow]
 image::node-exclude.png[width=400]

-
 == Complete the mapping

 If the mapping is not complete, ie. if any element in the model is missing the green checkmark, the import can't be run.
diff --git a/modules/ROOT/pages/import/quick-start.adoc b/modules/ROOT/pages/import/quick-start.adoc
index 0608bf39f..24b75f3a7 100644
--- a/modules/ROOT/pages/import/quick-start.adoc
+++ b/modules/ROOT/pages/import/quick-start.adoc
@@ -8,15 +8,17 @@ These reflect the three stages of importing data; provide the data, i.e., config
 If you haven't previously imported any data, all three are empty, otherwise sources, models, and import jobs are listed here.

 [.shadow]
+.Connect data source
 image::data-source.png[width=800]

 == Provide the data

 To get started you need to connect to a data source.
-Import supports PostgreSQL, MySQL, SQL Server, as well as locally hosted flat files.
+Import supports PostgreSQL, MySQL, SQL Server, Oracle, BigQuery, Databricks, Snowflake, AWS S3, Azure Blobs & Data Lake Storage, and Google Cloud Storage, as well as locally hosted flat files.

 [.shadow]
-image::sources.png[width=400]
+.Supported data sources
+image::sources.png[width=500]

 For relational databases and cloud data warehouses, you need to give the data source a name, configure the data source, and add user credentials for the database account.
 The data source configuration is essentially the same for both relational databases and data warehouses; you specify a *host* for your database, a *port* to connect to, the name of the *database*/*service*, and a *schema* that contains your tables (except for MySQL data sources).
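One note on the indexes-and-constraints page touched above: when a property is chosen as a node's ID, Import creates a constraint and an index automatically. As a rough illustration of what that corresponds to in Cypher, here is a minimal sketch, assuming it is a uniqueness constraint on a hypothetical `Person` label with an `id` property; the actual label, property, and constraint names depend on your data model, and Import creates them for you, so you never run this yourself.

[source, cypher]
----
// Illustrative sketch only: `Person`, `id`, and the constraint name are
// placeholders, not the exact names the Import service generates.
CREATE CONSTRAINT person_id_unique IF NOT EXISTS
FOR (p:Person)
REQUIRE p.id IS UNIQUE;

// A uniqueness constraint in Neo4j is backed by an index, so a lookup such as
// MATCH (p:Person {id: $id}) can use an index seek instead of a label scan.
----

Since a uniqueness constraint is backed by an index, a single statement of this kind accounts for both the constraint and the index mentioned in the page.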