diff --git a/modules/ROOT/content-nav.adoc b/modules/ROOT/content-nav.adoc index 975d6d0f1..2df507dda 100644 --- a/modules/ROOT/content-nav.adoc +++ b/modules/ROOT/content-nav.adoc @@ -125,6 +125,17 @@ Generic Start ** xref:api/overview.adoc[] ** xref:api/authentication.adoc[] ** link:{neo4j-docs-base-uri}/aura/platform/api/specification/[API Specification] + +* *Tutorials* +* Upgrade and migration +** xref:tutorials/migration-free.adoc[] +* Integrating with Neo4j Connectors +** xref:tutorials/spark.adoc[] +** xref:tutorials/bi.adoc[] +* xref:tutorials/performance-improvements.adoc[] +* xref:tutorials/troubleshooting.adoc[] +* xref:tutorials/create-auradb-instance-from-terminal.adoc[] + //// AuraDB End //// diff --git a/modules/ROOT/pages/apoc.adoc b/modules/ROOT/pages/apoc.adoc index 57c175969..3edf59de5 100644 --- a/modules/ROOT/pages/apoc.adoc +++ b/modules/ROOT/pages/apoc.adoc @@ -1,6 +1,6 @@ [[aura-apoc-support]] = APOC support -:descrription: This page lists supported APOC procedures in Neo4j Aura. +:description: This page lists supported APOC procedures in Neo4j Aura. :page-aliases: platform/apoc.adoc APOC (Awesome Procedures on Cypher) is a Neo4j library that provides access to additional procedures and functions, extending the use of the Cypher query language. For more information on APOC, see https://neo4j.com/docs/apoc/[the APOC documentation]. diff --git a/modules/ROOT/pages/tutorials/bi.adoc b/modules/ROOT/pages/tutorials/bi.adoc new file mode 100644 index 000000000..617584931 --- /dev/null +++ b/modules/ROOT/pages/tutorials/bi.adoc @@ -0,0 +1,88 @@ += Using the Neo4j BI Connector +:page-aliases:platform/tutorials/bi.adoc + +In this tutorial we use the Neo4j Connector for BI to read graph data from an Aura instance using some common <<_using_command_line_sql_clients,SQL clients>> and <<_using_bi_tools,BI tools>>. + +[CAUTION] +==== +This tutorial includes instructions on the usage of third-party software, which may be subject to changes beyond our control. +In case of doubt, refer to the third-party software documentation. +==== + +== Downloading the connector + +Download the connector from the https://neo4j.com/download-center/#integrations[Download Center]. +Depending on the SQL client or BI tool it will be used with, you will need either the JDBC or the ODBC connector; see the usage examples for further details. + +== Preparing example data + +Before trying the connector with any of the listed tools, some data needs to be loaded on Aura. +This can be achieved by running the following Cypher query in the Neo4j Browser: + +[source, cypher, subs=attributes+, role=noplay] +---- +CREATE + (john:Person {name: "John", surname: "Doe", age: 42}), + (jane:Person {name: "Jane", surname: "Doe", age: 40}), + (john)-[:KNOWS]->(jane) +---- + +== Using BI tools + +Commonly used BI tools include <<_tableau>> (which uses the JDBC driver) and <<_power_bi>> (which uses the ODBC driver). + +[TIP] +==== +When connecting with a JDBC driver, the `neo4j+s` URI scheme must be changed into `neo4j` and the `SSL=true` parameter must be added to the URL. +==== + +=== Tableau + +[NOTE] +==== +This example requires https://www.tableau.com/en-gb/products/desktop[Tableau Desktop]. + +Refer to the link:https://help.tableau.com/current/pro/desktop/en-us/examples_otherdatabases_jdbc.htm[Tableau documentation] for more information on how to add a JDBC database. +==== + +After downloading the JDBC Neo4j Connector for BI from the https://neo4j.com/download-center/#integrations[Download Center]: + +. 
Close any running instances of Tableau Desktop. +. Copy the Neo4j driver JAR file into the appropriate Tableau `Drivers` folder. +* Use `C:\Program Files\Tableau\Drivers` on Windows. +* Use `~/Library/Tableau/Drivers` on macOS. +If the folder is not visible, select `Go -> Go to Folder` in Finder to open the folder manually. + +. Start Tableau and search for the `Other Databases (JDBC)` option. +. Insert the Aura URL as `jdbc:neo4j://xxxxxxxx.databases.neo4j.io?SSL=true`, leave the SQL dialect as `SQL92`, and complete the relevant credentials. + +After the connection is established, you can select the `neo4j` database and the `Node` schema to find the `Person` table. +You can then explore the table to find the example data. + +==== Troubleshooting + +If the connection fails with a `Generic JDBC connection error`, check if you installed the Neo4j driver in the correct location and then: + +* Download the `SSL.com` root certificates as explained on link:https://www.ssl.com/how-to/install-ssl-com-ca-root-certificates/[ssl.com] and install them as shown in the link:https://help.tableau.com/current/pro/desktop/en-us/jdbc_ssl_config.htm[Tableau documentation], then restart Tableau and repeat the previous steps (recommended option). +* Add `&sslTrustStrategy=TRUST_ALL_CERTIFICATES` to the connection string (after `SSL=true`) and try to connect again. +**This option requires caution and should not be used in a production environment**. + +=== Power BI + +[NOTE] +==== +This example requires Microsoft Windows and https://powerbi.microsoft.com/en-us/desktop/[Power BI Desktop]. + +Refer to the link:https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-connect-using-generic-interfaces[Power BI documentation] for more information on how to add an ODBC database. +==== + +After downloading and installing the ODBC Neo4j Connector for BI from the https://neo4j.com/download-center/#integrations[Download Center]: + +. Open Power BI Desktop. +. Search for `ODBC` in the *Get data from another source* panel. +. Select `Simba Neo4j` in the *DSN dropdown* menu. +. Insert the connection string `Host=xxxxxxxx.databases.neo4j.io;SSL=1` in the *Advanced options* section. +. Insert your username and password. + +Once connected, open sequentially `ODBC` -> `neo4j` -> `Node` -> `Person` in the *Navigator* window to see a preview of the table. + diff --git a/modules/ROOT/pages/tutorials/create-auradb-instance-from-terminal.adoc b/modules/ROOT/pages/tutorials/create-auradb-instance-from-terminal.adoc new file mode 100644 index 000000000..95f891379 --- /dev/null +++ b/modules/ROOT/pages/tutorials/create-auradb-instance-from-terminal.adoc @@ -0,0 +1,114 @@ +[[create-auradb-instance-in-terminal]] += Create an AuraDB instance in the terminal +:description: This tutorial describes using the terminal to create an instance in the Aura Console. +:page-aliases:platform/tutorials/create-auradb-instance-from-terminal.adoc + +This tutorial describes using the terminal to create an instance in the Aura Console. + +== Preparation + +=== Generate API credentials + +* Log in to the Aura Console. +* Click your email address in the top right corner and select *Account details*. +* In the *API credentials* section, select *Create*. +Enter a descriptive name and save the generated Client ID and Client Secret. + +=== cURL +* Install cURL via your terminal +* For macOS with Homebrew: use `brew install curl`. +* Install cURL. +See link:https://curl.se/dlwiz/[curl download wizard] for more information. 
+* Check cURL is available: Type `curl -V` in the terminal + +== Obtain a bearer token + +[NOTE] +==== +Bearer tokens are valid for one hour. +==== + +In the terminal paste the snippet, replacing `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` with the values generated by the Aura Console. +Keep the `:` between the values. + +[source, cURL] +---- +curl --location 'https://api.neo4j.io/oauth/token' --header 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'grant_type=client_credentials' -u 'YOUR_CLIENT_ID:YOUR_CLIENT_SECRET' -v +---- + +=== Response body example + +Save the `access_token` from the end of the returned code. +This is your bearer token. +It looks similar to this example: + +[source, cURL] +---- +"access_token":"eyJ1c3IiOiJkNzI2MzE1My03MWZmLTUxMjQtOWVjYy1lOGFlM2FjNjNjZWUiLCJpc3MiOiJodHRwczovL2F1cmEtYXBpLmV1LmF1dGgwLmNvbS8iLCJzdWIiOiJFSDdsRTgwbEhWQVVkbDVHUUpEY0M1VDdxZ3BNTnpqVkBjbGllbnRzIiwiYXVkIjoiaHR0cHM6Ly9jb25zb2xlLm5lbzRqLmlvIiwiaWF0IjoxNzAyOTgzODQzLCJleHAiOjE3MDI5ODc0NDMsImF6cCI6IkVIN2xFODBsSFZBVWRsNUdRSkRjQzVUN3FncE1OempWIiwiZ3R5IjoiY2xpZW50LWNyZWRlbnRpYWxzIn0eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6ImFKbWhtUTlYeExsQmFLdHNuZnJIcCJ9..jkpatG4SCRnxwTPzfEcSJk3Yyd0u_NMH8epNqmSBMUlp_JvvqbKpNdkPIE6vx5hLRgVCVKovxl4KY9yzEkr7R5s4YU3s2K25eNB1q1y3yQ_-9N0e6eOhmjIrsWHMd_rl2NuGIHo6pHihumuJlEg-U2ELkWyu8Iz3zQxjycVnPHzlbu7sbtwVJdU7UzgO12jgDLA1T4mUqvxdAAdnoXO57SwczYoYKY2YL61CMTn-xdQ6MFS8A3vwpGQbRirwVVxvEmoIPCLlQwHeEC4_modJ4cifmjt6ChJb1sxsRpFvdNHm0vNcLjy-96e88D50AMgjvS4VQCmVKA7kUgt7t5IpKg","expires_in":3600,"token_type":"Bearer" +---- + +== Obtain the project ID + +Use cURL to obtain the project ID with your token. +Replace `YOUR_BEARER_TOKEN` with your token. + +[source, cURL] +---- +curl --location 'https://api.neo4j.io/v1/projects' --header 'Accept: application/json' --header 'Authorization: Bearer YOUR_BEARER_TOKEN' +---- + +This returns something similar to: + +[source, cURL] +---- +{"data":[{"id":"6e6bbbe2-5678-5f8a-1234-b1f62f08b98f","name":"team1"},{"id":"ad69ee24-1234-5678-af02-ff8d3cc23611","name":"team2"}]} +---- + +In the example response above, two projects are returned. +If you're a member of multiple projects, select the one you wish to use. + +[NOTE] +==== +_Project_ replaces _Tenant_ in the console UI and documentation. +However, in the API, `tenant` remains the nomenclature. +==== + +== Configure an AuraDB instance + +=== Configure the instance values + +Use the bearer token and Project ID to create the Aura instance. +Replace `YOUR_BEARER_TOKEN` with your token. +Replace `YOUR_PROJECT_ID` with your project ID. + +The following values are customizable `version`, `region`, `memory`, `name`, `type`, `tenant_id`, and `cloud_provider`. + + +[source, cURL] +---- +curl --location 'https://api.neo4j.io/v1/instances' --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Authorization: Bearer YOUR_BEARER_TOKEN' --data ' { "version": "5", "region": "europe-west1", "memory": "8GB", "name": "instance01", "type": "enterprise-db", "tenant_id": "YOUR_PROJECT_ID", "cloud_provider": "gcp" }' +---- +See xref:api/overview.adoc[Aura API documentation] for more details. + +[CAUTION] +==== +The legacy term `Enterprise` is still used within the codebase and API. +However, in the Aura console and documentation, the AuraDB Enterprise project type is now known as AuraDB Virtual Dedicated Cloud. +==== + +At this point, an Aura instance is provisioned in the Aura Console. 
+Optionally, use the following command in the terminal to check the status:
+
+[source, cURL]
+----
+curl --location 'https://api.neo4j.io/v1/instances/YOUR_INSTANCE_ID' --header 'Accept: application/json' --header 'Authorization: Bearer YOUR_BEARER_TOKEN'
+----
+
+The response describes the instance, including its `status` field.
+If the value of `status` shows `running`, you can start using the new Aura instance.
diff --git a/modules/ROOT/pages/tutorials/migration-free.adoc b/modules/ROOT/pages/tutorials/migration-free.adoc
new file mode 100644
index 000000000..733bd7137
--- /dev/null
+++ b/modules/ROOT/pages/tutorials/migration-free.adoc
@@ -0,0 +1,89 @@
+= Migrating your Neo4j AuraDB Free instance to another AuraDB plan
+:description: This section describes migrating your Neo4j AuraDB Free instance to another AuraDB plan
+:page-aliases: platform/tutorials/migration-free.adoc
+
+== AuraDB Professional or AuraDB Virtual Dedicated Cloud
+
+Upgrading your plan to AuraDB Professional or AuraDB Virtual Dedicated Cloud gives you access to additional resources and functionalities to support production workloads and applications with demanding storage and processing needs.
+
+*Migration options*
+
+* Upgrade to AuraDB Professional
+* Clone to new (Works for AuraDB Professional and AuraDS Professional)
+* Manual process
+
+== Upgrade to AuraDB Professional
+
+You can upgrade an instance to the Professional plan directly from the console.
+
+On the instance card, select *Upgrade*.
+
+Verify that the cloud provider and region are correct, select the instance size you need, add any extra details, and select *Create*.
+
+== Clone (Works for AuraDB Professional and AuraDS)
+
+The other way is to _clone_ your existing instance to the Professional plan.
+
+* Click the three dots (*...*) on an instance card
+* Select either *Clone to new instance* or *Clone to existing instance* (the existing instance's current content will be overwritten)
+
+== Manual process
+
+In your existing instance:
+
+. (Optional but recommended) Capture existing index and constraint definitions:
+.. Run the following Cypher statement:
++
+[source,cypher]
+----
+SHOW CONSTRAINTS YIELD createStatement
+----
++
+Save the result to a file to use later in the process.
+.. Run the following Cypher statement:
++
+[source,cypher]
+----
+SHOW INDEXES YIELD createStatement
+----
++
+Save the result to a file to use later in the process.
+
+. (Optional but recommended) Drop the indexes and constraints.
+.. Run the following Cypher statement to generate the commands to drop existing constraints:
++
+[source,cypher]
+----
+SHOW CONSTRAINTS YIELD name
+RETURN 'DROP CONSTRAINT ' + name + ';'
+----
+.. Execute the generated commands to drop existing constraints.
+.. Run the following Cypher statement to generate the commands to drop existing indexes:
++
+[source,cypher]
+----
+SHOW INDEXES YIELD name
+RETURN 'DROP INDEX ' + name + ';'
+----
+.. Execute the generated commands to drop existing indexes.
++
+For more information about indexes and constraints, see link:{neo4j-docs-base-uri}/cypher-manual/current/indexes/[Cypher Manual -> Indexes] and link:{neo4j-docs-base-uri}/cypher-manual/current/constraints/[Cypher Manual -> Constraints].
++
+. In the console of your existing instance (AuraDB Free), do the following:
+
+.. Download snapshot/Dump locally (the daily automatic snapshot)
+..
In the Aura Console select the AuraDB instance +.. Go to the *Snapshots* tab +.. Click the *three dots*, and select *Export* +.. Save the dump file locally (preserve the .backup extension) ++ +. Then create a new AuraDB instance in AuraDB Professional or AuraDB Virtual Dedicated Cloud with the right resource sizing. +From your new instance, do the following: + +.. Go to the instance card +.. Select *Backup & Restore* +.. Upload your .backup file ++ +. In the newly created AuraDB Professional or AuraDB Virtual Dedicated Cloud instance ++ +(Optional) Once the AuraDB instance is loaded and started, you can recreate the indexes and constraints, using the information captured earlier in the process. \ No newline at end of file diff --git a/modules/ROOT/pages/tutorials/performance-improvements.adoc b/modules/ROOT/pages/tutorials/performance-improvements.adoc new file mode 100644 index 000000000..fd0eac8e3 --- /dev/null +++ b/modules/ROOT/pages/tutorials/performance-improvements.adoc @@ -0,0 +1,116 @@ +[[aura-performance]] += Improving Cypher performance +:page-aliases:platform/tutorials/performance-improvements.adoc + +This page covers a number of steps you can take to improve the Cypher performance of your workload. + +== Cypher statements with literal values + +One of the main causes of poor query performance is due to running many Cypher statements with literal values. +This leads to inefficient Cypher processing as there is currently no use of parameters. +As a result, you don't benefit fully from the execution plan cache that would occur otherwise. + +The following Cypher queries are identical in form but use different literals: + +[source, cypher, role=noplay] +---- +MATCH (tg:asset) WHERE tg.name = "ABC123" +MERGE (tg)<-[:TAG_OF]-(z1:tag {name: "/DATA01/" + tg.name + "/Top_DOOR"}) +MERGE (tg)<-[:TAG_OF]-(z2:tag {name: "/DATA01/" + tg.name + "/Data_Vault"}) +---- + +In cases like this, query parsing and execution plan generation happen multiple times, resulting in a loss of efficiency. +One way to solve that is by rewriting the former example as follows: + +[source, cypher, role=noplay] +---- +MATCH (tg:asset) WHERE tg.name = $tgName +WITH tg +UNWIND $tags as tag +MERGE (tg)<-[:TAG_OF]-(:tag {name: tag.name}) +---- + +By replacing the literal values in the queries with parameters you get a better execution plan caching reuse. +Your application needs to place all the values in a parameter list and then you can issue one statement that iterates through them. +Making these changes will lead to improvements in execution and memory usage. + +== Review queries and model + +One first action that you can take is reviewing and listing all your Cypher queries. +The best starting point is to have a good understanding of the sequence and frequency of the Cypher queries submitted. + +Additionally, if the queries are generated by a framework, it is essential to log them in Cypher form to make reviewing easier. + +You can also profile a Cypher query by prepending it with `EXPLAIN` (to see the execution plan without running the query) or `PROFILE` (to run and profile the query). +Read more about link:{neo4j-docs-base-uri}/cypher-manual/current/query-tuning/#how-do-i-profile-a-query[profiling a query]. + +[NOTE] +==== +When using `PROFILE` you may need to run it multiple times in order to get the optimal value. +The first time the query runs, it gets a full cycle of evaluation, planning, and interpreting before making its way into the query cache. 
+Once in the cache, the subsequent execution time will improve. +Furthermore, always use parameters instead of literal values to benefit from the cache. +==== + +Read more about link:{neo4j-docs-base-uri}/cypher-manual/current/execution-plans/[execution plans] and see a detailed guide for the steps on link:https://support.neo4j.com/s/article/4404022359443-Performance-tuning-with-Neo4j-AuraDB[how to capture the execution plans]. + +To best interpret the output of your execution plan, it is recommended that you get familiar with the terms used on it. +See link:{neo4j-docs-base-uri}/cypher-manual/current/execution-plans/operator-summary/[this summary of execution plan operators] for more information. + +== Index specification + +As your data volume grows, it is important to define constraints and indexes in order to achieve the best performance for your queries. +For that, the runtime engine will need to evaluate the cost associated with a query and, to get the best estimations, it will rely on already existing indexes. +This will likely show whether an index is missing from the execution plan and which one is it. +Though in some circumstances it might look like an index is not available or possible, it may also make sense to reconsider the model and create an intermediate node or another relationship type just to leverage it. + +Read more about link:{neo4j-docs-base-uri}/cypher-manual/current/query-tuning/indexes/[the use of indexes] for a more comprehensive explanation. + +[NOTE] +==== +You can also fine-tune the usage of an index in your query by leveraging it with the link:{neo4j-docs-base-uri}/cypher-manual/current/query-tuning/using/[`USING`] clause. +==== + +== Review metrics and instance size + +With Aura, you can keep an eye on some key metrics to see which resource constraints your instance may be experiencing. +Follow the steps described in link:{neo4j-docs-base-uri}/aura/auradb/managing-databases/monitoring/[Monitoring] to check that information. + +At this stage, if the key metrics are too high, you may want to reconsider the instance sizing. +A resize operation does not cause any downtime, and you would only pay for what you use. + +[TIP] +==== +You should always size your instance against your workload activity peaks. +==== + +== Consider concurrency + +Sometimes individual queries may be optimized on their own and run fine, but the sheer volume and concurrency of operations can overwhelm your Aura instance. + +To review what is running at any given time (this makes particular sense if you have a long-running query), you can use these statements and list what is running: + +* link:{neo4j-docs-base-uri}/cypher-manual/current/clauses/transaction-clauses/#query-listing-transactions[`SHOW TRANSACTIONS`] +* link:{neo4j-docs-base-uri}/operations-manual/current/reference/procedures/#procedure_dbms_listqueries[`CALL dbms.listQueries()`] + +== Runtime engine and Cypher version + +The execution plan should show you the runtime that is selected for the execution of your query. +Usually, the planner makes the right decision, but it may be worth checking at times if any other runtime performs better. +Read more about link:{neo4j-docs-base-uri}/cypher-manual/current/query-tuning/#cypher-runtime[query tuning] on Cypher runtime. 
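+
+As a minimal check (the `Person` label and the `name` parameter below are illustrative, not part of your model), you can prepend `EXPLAIN` to a query and read the runtime reported in the resulting plan summary:
+
+[source, cypher, role=noplay]
+----
+EXPLAIN
+MATCH (p:Person)
+WHERE p.name = $name
+RETURN p
+----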
+ +To invoke the use of a given runtime forcibly, prepend your Cypher statement with: + +* `CYPHER runtime=pipelined` for `pipelined` runtime +* `CYPHER runtime=slotted` for `slotted` runtime +* `CYPHER runtime=interpreted` for `interpreted` runtime + +If you have a Cypher pattern that is not performing without error, it could as well be running on a prior Cypher version. +You can control the version used to interpret your queries by using these link:{neo4j-docs-base-uri}/cypher-manual/current/query-tuning/#cypher-version[Cypher query options]. + +== Network and the cost of the round-trip + +With Aura, it is essential to consider the best cloud in your region as the physical distance is a direct factor in the achievable network latency. + +When some event causes any network disruption between your application and Aura, you would be affected by round-trip network latency to re-submit a query. +With Aura, this is particularly important because you will need to be using transaction functions when link:{neo4j-docs-base-uri}/aura/auradb/connecting-applications/overview/[connecting your instance to applications]. diff --git a/modules/ROOT/pages/tutorials/spark.adoc b/modules/ROOT/pages/tutorials/spark.adoc new file mode 100644 index 000000000..8bd911e2d --- /dev/null +++ b/modules/ROOT/pages/tutorials/spark.adoc @@ -0,0 +1,92 @@ += Using the Neo4j Connector for Apache Spark +:product: Aura +:page-aliases:platform/tutorials/spark.adoc + +This tutorial shows how to use the Neo4j Connector for Apache Spark to write to and read data from an Aura instance. + +== Setup + +. Download link:https://spark.apache.org/downloads.html[Apache Spark^]. ++ +_Example: Spark 3.4.1, pre-built for Apache Hadoop 3.3 and later with Scala 2.12._ + +. Download the link:https://github.com/neo4j-contrib/neo4j-spark-connector/releases[Neo4j Connector JAR file^], making sure to match both the Spark version and the Scala version. ++ +_Example: Neo4j Connector 5.1.0, built for Spark 3.x with Scala 2.12._ + +. Decompress the Spark file and launch the Spark shell as in the following example: ++ +[source, shell] +---- +$ spark-3.4.1-bin-hadoop3/bin/spark-shell --jars neo4j-connector-apache-spark_2.12-5.1.0_for_spark_3.jar +---- + +== Running code in Apache Spark + +[TIP] +==== +You can copy-paste Scala code in the Spark shell by activating the `paste` mode with the `:paste` command. +==== + +Create a Spark session and set the Aura connection credentials: + +[source, scala] +---- +import org.apache.spark.sql.{SaveMode, SparkSession} + +val spark = SparkSession.builder().getOrCreate() + +// Replace with the actual connection URI and credentials +val url = "neo4j+s://xxxxxxxx.databases.neo4j.io" +val username = "neo4j" +val password = "" +---- + +Then, create the `Person` class and a Spark `Dataset` with some example data: + +[source, scala] +---- +case class Person(name: String, surname: String, age: Int) + +// Create example Dataset +val ds = Seq( + Person("John", "Doe", 42), + Person("Jane", "Doe", 40) +).toDS() +---- + +Write the data to Aura: + +[source, scala] +---- +// Write to Neo4j +ds.write.format("org.neo4j.spark.DataSource") + .mode(SaveMode.Overwrite) + .option("url", url) + .option("authentication.basic.username", username) + .option("authentication.basic.password", password) + .option("labels", ":Person") + .option("node.keys", "name,surname") + .save() +---- + +You can then query and visualize the data using the Neo4j Browser. 
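+
+As a quick check, you could run a Cypher query like the following in the Neo4j Browser to list the example `Person` nodes written above (the property names match the fields of the `Person` case class):
+
+[source, cypher]
+----
+MATCH (p:Person)
+RETURN p.name, p.surname, p.age
+----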
+ +You can also read the data back from Aura: + +[source, scala] +---- +// Read from Neo4j +val data = spark.read.format("org.neo4j.spark.DataSource") + .option("url", url) + .option("authentication.basic.username", username) + .option("authentication.basic.password", password) + .option("labels", "Person") + .load() + +// Visualize the data as a table +data.show() +---- + +For further information on how to use the connector, read the link:{neo4j-docs-base-uri}/spark/[Neo4j Spark Connector docs]. + diff --git a/modules/ROOT/pages/tutorials/troubleshooting.adoc b/modules/ROOT/pages/tutorials/troubleshooting.adoc new file mode 100644 index 000000000..e5b5da0f4 --- /dev/null +++ b/modules/ROOT/pages/tutorials/troubleshooting.adoc @@ -0,0 +1,176 @@ +[[aura-troubleshooting]] += Troubleshooting +:description: Troubleshooting information that can help you diagnose and correct problems. +:page-aliases:platform/tutorials/troubleshooting.adoc + +This page provides possible solutions to several common issues you may encounter when using Neo4j Aura. + +Regardless of the issue, viewing the xref:platform/logging/download-logs.adoc[Aura query log] is always recommended to monitor processes and verify any problems. + +== Query performance + +=== `MemoryLimitExceededException` + +During regular operations of your Aura instance, you may at times see that some of your queries fail with the error: + +.MemoryLimitExceededException error +[source, shell, role=nocopy wrap] +---- +org.neo4j.memory.MemoryLimitExceededException: The allocation of an extra 8.3 MiB would use more than the limit 278.0 MiB. +Currently using 275.1 MiB. dbms.memory.transaction.global_max_size threshold reached +---- + +The `org.neo4j.memory.MemoryLimitExceededException` configuration acts as a safeguard, limiting the quantity of memory allocated to all transactions while preserving the regular operations of the Aura instance. +Similarly, the property `dbms.memory.transaction.global_max_size` also aims to protect the Aura Instance from experiencing any OOM (Out of memory) exceptions and increase resiliency. +It is enabled in Aura and cannot be disabled. + +However, the measured heap usage of all transactions is only an estimate and may differ from the actual number. +The estimation algorithm relies on a conservative approach, which can lead to overestimation of memory usage. +In such cases, all contributing objects' identities are unknown and cannot be assumed to be shared. + +Solution:: + +[NOTE] +==== +We recommend handling this error in your application code, as it may be intermittent. +==== + +Overestimation is most likely to happen when using `UNWIND` on long lists or when expanding a variable length or shortest path pattern. +The many relationships shared between the computed result paths could be the cause of a lack of precision in the estimation algorithm. + +To avoid this scenario, try running the same query without using a sorting operation like `ORDER BY` or `DISTINCT`. +Additionally, if possible, handle this ordering or uniqueness in your application. + +If removing the `ORDER BY` or `DISTINCT` clauses does not solve the issue, the primary mitigation for this error is to perform one or more of these actions: + +* Handle this exception in your code and be prepared to retry if this is an intermittent error. +Keep in mind that the query can succeed regardless. ++ +* Rework the relevant query to optimize it. 
+** Use `EXPLAIN` or `PROFILE` to review the plans (see more about link:https://neo4j.com/docs/cypher-manual/current/query-tuning/[query tuning]). +** Use `PROFILE` in the Cypher Shell to check the overall memory footprint of a query. +The output will include memory consumption information, the query's result, if any, and the execution plan. +In the following example, the memory consumed is 11,080 Bytes: ++ +image::planSummary.png["Plan summary"] + +* Increase the instance size of your Aura deployment to get more resources. +* Reduce the concurrency of queries heavy on resources to get a better chance of success. + +[NOTE] +==== +If this error occurs while loading data from CSV files, use `apoc.periodic.iterate` to import the data and use a relatively small number for the `batch_size` parameter. +For more information, visit the link:https://support.neo4j.com/s/article/1500012376402-Using-apoc-to-conditional-loading-large-scale-data-set-from-JSON-or-CSV-files[Customer Support Portal]. +==== + +See link:https://neo4j.com/docs/operations-manual/current/performance/memory-configuration/#memory-configuration-considerations[Considerations on memory configuration] for further reading on memory management. + +== Neo4j Admin database upload errors + +The `database upload` command was introduced in Neo4j Admin version 5, replacing the `push-to-cloud` command that was present in Neo4j Admin version 4.4 and earlier. +The following solutions are relevant to both commands. + +=== `LegacyIndexes` + +When attempting to use `database upload` where there are native `LegacyIndexes` present, the request might fail with the following error: + +.LegacyIndexes error +[source, shell, role=nocopy wrap] +---- +ERROR: Source dumpfile has legacy indexes enabled, which we do not support. +User needs to manually follow instructions in neo4j logs to upgrade indexes. +---- + +Solution:: + +To resolve the issue, follow these steps: + +. Make sure you are at least on Neo4j version 4.4 or later. +See more information about link:https://neo4j.com/docs/upgrade-migration-guide/current/[upgrade and migration]. +. In your local graph, use the following commands to get a list of the indexes and their types. +This will also provide the sequential list of commands to drop and then recreate the indexes: ++ +.Return a list of indexes and their types +[source, cypher, role=noplay] +---- +CALL db.constraints() YIELD description +UNWIND ["DROP", "CREATE"] AS command +RETURN command + " " + description +---- ++ +. In Neo4j Browser, select the "Enable multi statement query editor" option under the browser settings. +. Take the list of commands from the 2nd step and copy them in one list of multiple queries into Browser and run those queries. +. After the indexes are recreated, try the `database upload` command again. + +=== `InconsistentData` + +This error message will likely trigger when Neo4j Aura cannot safely load the data provided due to inconsistencies. + +Solution:: + +If you encounter this error, please raise a ticket with our link:https://support.neo4j.com[Customer Support] team. + +=== `UnsupportedStoreFormat` + +You may get this error if the store you are uploading is in a Neo4j version that is not directly supported in Neo4j Aura. + +Solution:: + +. link:https://neo4j.com/docs/upgrade-migration-guide/current/[Upgrade your database]. +Make sure you are on Neo4j 4.4 or later. +. If you encounter problems upgrading, please raise a ticket with our link:https://support.neo4j.com[Customer Support] team. 
+ +=== `LogicalRestrictions` + +You may get this error when the store you are uploading exceeds the logical limits of your database. + +Solution:: + +. Delete nodes and relationships to ensure the data is within the specified limits for your instance, and try the upload again. +. If you are confident you have not exceeded these limits, please raise a ticket with our link:https://support.neo4j.com[Customer Support] team. + +=== `Fallback` + +This error can be triggered when the uploaded file is not recognized as a valid Neo4j dump file. + +Solution:: + +. Check the file and try again. +. If you are confident the file being uploaded is correct, please raise a ticket with our link:https://support.neo4j.com[Customer Support] team. + +== Driver integration + +=== JavaScript routing table error + +JavaScript driver version 4.4.5 and greater assumes the existence of database connectivity. +When the connection fails, the two most common error messages are "Session Expired" or a routing table error: + +.Routing table error +[source, shell, role=nocopy wrap] +---- +Neo4jError: Could not perform discovery. +No routing servers available. +Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1644933316983, routers=[], readers=[], writers=[]] +---- + +This error can also be encountered when no default database is defined. + +Solution:: + +Verify connectivity before creating a session object, and specify the default database in your driver definition. + +.Verifying connectivity +[source, javascript] +---- +const session = driver.session({ database: "neo4j" }) +driver.verifyConnectivity() + +let session = driver.session(....) +---- + +[NOTE] +==== +Rapid session creation can exceed the database's maximum concurrent connection limit, resulting in the “Session Expired” error when creating more sessions. +==== + +