Binary file modified modules/ROOT/images/data-source.png
Binary file modified modules/ROOT/images/sources.png
5 changes: 5 additions & 0 deletions modules/ROOT/pages/import/file-provision.adoc
@@ -19,7 +19,12 @@ When you use the *New data source* button, you are presented with the following
* MySQL
* SQL Server
* Oracle
* BigQuery
* Databricks
* Snowflake
* AWS S3
* Azure Blobs & Data Lake Storage
* Google Cloud Storage

Regardless of which one you select, you provide roughly the same information so that the Import service can load the tables from your remote source.

1 change: 1 addition & 0 deletions modules/ROOT/pages/import/indexes-and-constraints.adoc
@@ -5,6 +5,7 @@
Import supports adding indexes to improve the read performance of queries and creating constraints to ensure data accuracy.
Both are found in the details panel, in a tab that is visible when a single node is selected in the data model panel.

[.shadow]
image::constraints-tab.png[]

Once a node is mapped to a file and a property is selected to serve as its ID, both a constraint and an index are created automatically.
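
For reference, what Import sets up corresponds to a uniqueness constraint in Cypher; in Neo4j, such a constraint is automatically backed by an index.
A minimal sketch, assuming a hypothetical node labeled `Person` whose ID property is `id`:

[source,cypher]
----
// Hypothetical equivalent of what Import creates automatically:
// a uniqueness constraint on the node's ID property. Neo4j backs
// this constraint with an index, so reads on `id` are fast too.
CREATE CONSTRAINT person_id_unique IF NOT EXISTS
FOR (n:Person) REQUIRE n.id IS UNIQUE;
----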
5 changes: 5 additions & 0 deletions modules/ROOT/pages/import/introduction.adoc
@@ -9,7 +9,12 @@ It allows you to import data without using any code from:
* MySQL
* SQL Server
* Oracle
* BigQuery
* Databricks
* Snowflake
* AWS S3
* Azure Blobs & Data Lake Storage
* Google Cloud Storage
* CSV files

This service is also available as a standalone tool for importing data into self-managed Neo4j instances.
2 changes: 1 addition & 1 deletion modules/ROOT/pages/import/mapping.adoc
@@ -95,9 +95,9 @@ By default, Import excludes any rows where the value of the node ID column is empty

The node exclude list is available from the more menu (`...`) in the data model panel, under _Settings_.

[.shadow]
image::node-exclude.png[width=400]


== Complete the mapping

If the mapping is not complete, i.e. if any element in the model is missing the green checkmark, the import cannot be run.
6 changes: 4 additions & 2 deletions modules/ROOT/pages/import/quick-start.adoc
@@ -8,15 +8,17 @@ These reflect the three stages of importing data; provide the data, i.e., config
If you haven't previously imported any data, all three are empty; otherwise, sources, models, and import jobs are listed here.

[.shadow]
.Connect data source
image::data-source.png[width=800]

== Provide the data

To get started, you need to connect to a data source.
Import supports PostgreSQL, MySQL, SQL Server, as well as locally hosted flat files.
Import supports PostgreSQL, MySQL, SQL Server, Oracle, BigQuery, Databricks, Snowflake, AWS S3, Azure Blobs & Data Lake Storage, Google Cloud Storage, as well as locally hosted flat files.

[.shadow]
image::sources.png[width=400]
.Supported data sources
image::sources.png[width=500]

For relational databases and cloud data warehouses, you need to give the data source a name, configure the data source, and add user credentials for the database account.
The data source configuration is essentially the same for both relational databases and data warehouses; you specify a *host* for your database, a *port* to connect to, the name of the *database*/*service*, and a *schema* that contains your tables (except for MySQL data sources).
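
As an illustration, the details for a hypothetical PostgreSQL source might look as follows (all values are made up and only indicate the shape of the configuration):

----
Name:     my-postgres-source   # hypothetical name for the data source
Host:     db.example.com       # hypothetical host
Port:     5432                 # PostgreSQL's default port
Database: northwind            # hypothetical database name
Schema:   public               # not needed for MySQL sources
----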