diff --git a/docs-2.0/2.quick-start/4.nebula-graph-crud.md b/docs-2.0/2.quick-start/4.nebula-graph-crud.md index 9923507b187..81294382fc0 100644 --- a/docs-2.0/2.quick-start/4.nebula-graph-crud.md +++ b/docs-2.0/2.quick-start/4.nebula-graph-crud.md @@ -85,6 +85,10 @@ In this topic, we will use the following dataset to demonstrate basic CRUD opera nebula> CREATE SPACE basketballplayer(partition_num=15, replica_factor=1, vid_type=fixed_string(30)); ``` + !!! note + + If the system returns the error `[ERROR (-1005)]: Host not enough!`, check whether you have [registered the Storage Service](../2.quick-start/3.1add-storage-hosts.md). + 2. Check the partition distribution with `SHOW HOSTS` to make sure that the partitions are distributed in a balanced way. ```ngql diff --git a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md index 725cca1fe0b..6818a1cd1d8 100644 --- a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md +++ b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md @@ -31,7 +31,7 @@ | bit_and() | Bitwise AND. | | bit_or() | Bitwise OR. | | bit_xor() | Bitwise XOR. | - | int size() | Returns the number of elements in a list or a map. | + | int size() | Returns the number of elements in a list or a map, or the length of a string. | | int range(int start, int end, int step) | Returns a list of integers from `[start,end]` in the specified steps. `step` is 1 by default. | | int sign(double x) | Returns the signum of the given number.
If the number is `0`, the system returns `0`.
If the number is negative, the system returns `-1`.
If the number is positive, the system returns `1`. | | double e() | Returns the base of the natural logarithm, e (2.718281828459045). | @@ -60,7 +60,7 @@ |string toLower(string a) | The same as `lower()`. | |string upper(string a) | Returns the argument in uppercase. | |string toUpper(string a) | The same as `upper()`. | - |int length(string a) | Returns the length of the given string in bytes. | + |int length(a) | Returns the length of the given string in bytes or the length of a path in hops. | |string trim(string a) | Removes leading and trailing spaces. | |string ltrim(string a) | Removes leading spaces. | |string rtrim(string a) | Removes trailing spaces. | @@ -82,13 +82,13 @@ |Function| Description | |---- | ----| - |int now() | Returns the current date and time of the system timezone. | - |timestamp timestamp() | Returns the current date and time of the system timezone. | + |int now() | Returns the current timestamp of the system. | + |timestamp timestamp() | Returns the current timestamp of the system. | |date date() | Returns the current UTC date based on the current system. | |time time() | Returns the current UTC time based on the current system. | |datetime datetime() | Returns the current UTC date and time based on the current system. | -* [Schema functions](../3.ngql-guide/6.functions-and-expressions/4.schema.md) +* [Schema-related functions](../3.ngql-guide/6.functions-and-expressions/4.schema.md) * For nGQL statements @@ -185,14 +185,14 @@ | Match vertices | `(v)` | You can use a user-defined variable in a pair of parentheses to represent a vertex in a pattern. For example: `(v)`. | | Match tags | `MATCH (v:player) RETURN v` | You can specify a tag with `:` after the vertex in a pattern. | | Match multiple tags | `MATCH (v:player:team) RETURN v LIMIT 10` | To match vertices with multiple tags, use colons (:). | - | Match vertex properties | `MATCH (v:player{name:"Tim Duncan"}) RETURN v` | You can specify a vertex property with `{: }` after the tag in a pattern. | + | Match vertex properties | `MATCH (v:player{name:"Tim Duncan"}) RETURN v`

`MATCH (v) WITH v, properties(v) as props, keys(properties(v)) as kk LIMIT 10000 WHERE [i in kk where props[i] == "Tim Duncan"] RETURN v` | You can specify a vertex property with `{<prop_name>: <prop_value>}` after the tag in a pattern; or use a vertex property value to get vertices directly. | | Match a VID. | `MATCH (v) WHERE id(v) == 'player101' RETURN v` | You can use the VID to match a vertex. The `id()` function can retrieve the VID of a vertex. | | Match multiple VIDs. | `MATCH (v:player { name: 'Tim Duncan' })--(v2) WHERE id(v2) IN ["player101", "player102"] RETURN v2` | To match multiple VIDs, use `WHERE id(v) IN [vid_list]`. | | Match connected vertices | `MATCH (v:player{name:"Tim Duncan"})--(v2) RETURN v2.player.name AS Name` | You can use the `--` symbol to represent edges of both directions and match vertices connected by these edges. You can add a `>` or `<` to the `--` symbol to specify the direction of an edge. | | Match paths | `MATCH p=(v:player{name:"Tim Duncan"})-->(v2) RETURN p` | Connected vertices and edges form a path. You can use a user-defined variable to name a path. | | Match edges | `MATCH (v:player{name:"Tim Duncan"})-[e]-(v2) RETURN e`
`MATCH ()<-[e]-() RETURN e LIMIT 3` | Besides using `--`, `-->`, or `<--` to indicate a nameless edge, you can use a user-defined variable in a pair of square brackets to represent a named edge. For example: `-[e]-`. | | Match an edge type | `MATCH ()-[e:follow]-() RETURN e LIMIT 5` | Just like vertices, you can specify an edge type with `:` in a pattern. For example: `-[e:follow]-`. | - | Match edge type properties | ` MATCH (v:player{name:"Tim Duncan"})-[e:follow{degree:95}]->(v2) RETURN e` | You can specify edge type properties with `{<prop_name>: <prop_value>}` in a pattern. For example: `[e:follow{likeness:95}]`. | + | Match edge type properties | ` MATCH (v:player{name:"Tim Duncan"})-[e:follow{degree:95}]->(v2) RETURN e`

`MATCH ()-[e]->() WITH e, properties(e) as props, keys(properties(e)) as kk LIMIT 10000 WHERE [i in kk where props[i] == 90] RETURN e` | You can specify edge type properties with `{<prop_name>: <prop_value>}` in a pattern. For example: `[e:follow{degree:95}]`; or use an edge type property value to get edges directly. | | Match multiple edge types | `MATCH (v:player{name:"Tim Duncan"})-[e:follow | :serve]->(v2) RETURN e` | The `|` symbol can help match multiple edge types. For example: `[e:follow|:serve]`. The English colon (:) before the first edge type cannot be omitted, but the English colon before the subsequent edge type can be omitted, such as `[e:follow|serve]`. | | Match multiple edges | `MATCH (v:player{name:"Tim Duncan"})-[]->(v2)<-[e:serve]-(v3) RETURN v2, v3` | You can extend a pattern to match multiple edges in a path. | | Match fixed-length paths | `MATCH p=(v:player{name:"Tim Duncan"})-[e:follow*2]->(v2) RETURN DISTINCT v2 AS Friends` | You can use the `:<edge_type>*<hop>` pattern to match a fixed-length path. `hop` must be a non-negative integer. The data type of `e` is a list.| diff --git a/docs-2.0/20.appendix/0.FAQ.md b/docs-2.0/20.appendix/0.FAQ.md index ce656c8f9ff..4c172e8e808 100644 --- a/docs-2.0/20.appendix/0.FAQ.md +++ b/docs-2.0/20.appendix/0.FAQ.md @@ -166,7 +166,7 @@ The reason for this error may be that the amount of data to be queried is too la - When importing data, set [Compaction](../8.service-tuning/compaction.md) manually to make reads faster. -- Extend the RPC connection timeout of the Graph service and the Storage service. Modify the value of `--storage_client_timeout_ms` in the `nebula-storaged.conf` file. This configuration is measured in milliseconds (ms). The default value is 60000ms. +- Extend the RPC connection timeout of the Graph service and the Storage service. Modify the value of `--storage_client_timeout_ms` in the `nebula-graphd.conf` file. This configuration is measured in milliseconds (ms). The default value is 60000ms. ### "How to resolve the error `MetaClient.cpp:65] Heartbeat failed, status:Wrong cluster!` in `nebula-storaged.INFO`, or `HBProcessor.cpp:54] Reject wrong cluster host "x.x.x.x":9771!` in `nebula-metad.INFO`?" @@ -293,7 +293,7 @@ Or get vertices by each tag, and then group them by yourself. Yes, for more information, see [Keywords and reserved words](../3.ngql-guide/1.nGQL-overview/keywords-and-reserved-words.md). -### "How to get the out-degree/the in-degree of a vertex with a given name?" +### "How to get the out-degree/the in-degree of a given vertex?" The out-degree of a vertex refers to the number of edges starting from that vertex, while the in-degree refers to the number of edges pointing to that vertex. @@ -388,9 +388,9 @@ If you have not modified the predefined ports in the [Configurations](../5.confi | Service | Port | |---------|---------------------------| -| Meta | 9559, 9560, 19559, 19560 | -| Graph | 9669, 19669, 19670 | -| Storage | 9777 ~ 9780, 19779, 19780 | +| Meta | 9559, 9560, 19559 | +| Graph | 9669, 19669 | +| Storage | 9777 ~ 9780, 19779 | If you have customized the configuration files and changed the predefined ports, find the port numbers in your configuration files and open them on the firewalls. diff --git a/docs-2.0/20.appendix/6.eco-tool-version.md b/docs-2.0/20.appendix/6.eco-tool-version.md index 900f714da10..c7a8ab3b092 100644 --- a/docs-2.0/20.appendix/6.eco-tool-version.md +++ b/docs-2.0/20.appendix/6.eco-tool-version.md @@ -146,6 +146,18 @@ Docker Compose can quickly deploy NebulaGraph clusters.
For how to use it, pleas |:---|:---| | {{ nebula.tag }} | {{br.tag}}| + +{{ent.ent_begin}} +## Backup & Restore Enterprise Edition + +Backup Restore (BR for short) Enterprise Edition is a Command-Line Interface (CLI) tool. With BR Enterprise Edition, you can back up and restore NebulaGraph Enterprise Edition data. + +|NebulaGraph version|BR version| +|:---|:---| +| {{ nebula.tag }} | {{br_ent.tag}}| + +{{ent.ent_end}} + ## NebulaGraph Bench [NebulaGraph Bench](https://github.com/vesoft-inc/nebula-bench/releases/tag/{{bench.tag}}) is used to test the baseline performance data of NebulaGraph. It uses the standard data set of LDBC. diff --git a/docs-2.0/20.appendix/error-code.md b/docs-2.0/20.appendix/error-code.md index ee8a0ec46a6..83d7fe214c3 100644 --- a/docs-2.0/20.appendix/error-code.md +++ b/docs-2.0/20.appendix/error-code.md @@ -10,180 +10,185 @@ NebulaGraph returns an error code when an error occurs. This topic describes the - When the code returned is `0`, it means that the operation is successful. -|Error Code|Description| -|:---|:---| -|`-1`| Lost connection | -|`-2`| Unable to establish connection | -|`-3`| RPC failure | -|`-4`| Raft leader has been changed| -|`-5`| Graph space does not exist | -|`-6`| Tag does not exist | -|`-7`| Edge type does not exist | -|`-8`| Index does not exist| -|`-9`| Edge type property does not exist| -|`-10`| Tag property does not exist| -|`-11`| The current role does not exist| -|`-12`| The current configuration does not exist| -|`-13`| The current host does not exist| -|`-15`| Listener does not exist| -|`-16`| The current partition does not exist| -|`-17`| Key does not exist| -|`-18`| User does not exist| -|`-19`| Statistics do not exist| -|`-20`| No current service found| -|`-21`| Drainer does not exist| -|`-22`| Drainer client does not exist| -|`-24`| Backup failed| -|`-25`| The backed-up table is empty| -|`-26`| Table backup failure| -|`-27`| MultiGet could not get all data| -|`-28`| Index rebuild failed| -|`-29`| Password is invalid| -|`-30`| Unable to get absolute path| -|`-1001`| Authentication failed| -|`-1002`| Invalid session| -|`-1003`| Session timeout| -|`-1004`| Syntax error| -|`-1005`| Execution error| -|`-1006`| Statement is empty| -|`-1008`| Permission denied| -|`-1009`| Semantic error| -|`-1010`| Maximum number of connections exceeded| -|`-1011`| Access to storage failed (only some requests succeeded)| -|`-2001`| Host does not exist| -|`-2002`| Host already exists| -|`-2003`| Invalid host| -|`-2004`| The current command, statement, or function is not supported| -|`-2007`| Configuration items cannot be changed| -|`-2008`| Parameters conflict with meta data| -|`-2009`| Invalid parameter| -|`-2010`| Wrong cluster| -|`-2011`| Listener conflicts| -|`-2012`| Host not exist| -|`-2013`| Schema name already exists| -|`-2014`| There are still indexes related to tag or edge, cannot drop it| -|`-2015`| There are still some space on the host, cannot drop it| -|`-2021`| Failed to store data| -|`-2022`| Illegal storage segment| -|`-2023`| Invalid data balancing plan| -|`-2024`| The cluster is already in the data balancing status| -|`-2025`| There is no running data balancing plan| -|`-2026`| Lack of valid hosts| -|`-2027`| A data balancing plan that has been corrupted| -|`-2029`| Lack of valid drainers| -|`-2030`| Failed to recover user role| -|`-2031`| Number of invalid partitions| -|`-2032`| Invalid replica factor| -|`-2033`| Invalid character set| -|`-2034`| Invalid character sorting rules| -|`-2035`| Character set and character sorting rule 
mismatch| -|`-2040`| Failed to generate a snapshot| -|`-2041`| Failed to write block data| -|`-2044`| Failed to add new task| -|`-2045`| Failed to stop task| -|`-2046`| Failed to save task information| -|`-2047`| Data balancing failed| -|`-2048`| The current task has not been completed| -|`-2049`| Task report failed| -|`-2050`| The current task is not in the graph space| -|`-2051`| The current task needs to be resumed| -|`-2052`| The job status has already been failed or finished | -|`-2053`| Job default status| -|`-2054`| The given job do not support stop| -|`-2055`| The leader distribution has not been reported, so can't send task to storage| -|`-2065`| Invalid task| -|`-2066`| Backup terminated (index being created)| -|`-2067`| Graph space does not exist at the time of backup| -|`-2068`| Backup recovery failed| -|`-2069`| Session does not exist| -|`-2070`| Failed to get cluster information| -|`-2071`| Failed to get absolute path when getting cluster information| -|`-2072`| Unable to get an agent when getting cluster information| -|`-2073`| Query not found| -|`-2074`| Failed to receive heartbeat from agent| -|`-2080`| Invalid variable| -|`-2081`| Variable value and type do not match| -|`-3001`| Consensus cannot be reached during an election| -|`-3002`| Key already exists| -|`-3003`| Data type mismatch| -|`-3004`| Invalid field value| -|`-3005`| Invalid operation| -|`-3006`| Current value is not allowed to be empty| -|`-3007`| Field value must be set if the field value is `NOT NULL` or has no default value| -|`-3008`| The value is out of the range of the current type| -|`-3010`| Data conflict| -|`-3011`| Writes are delayed| -|`-3021`| Incorrect data type| -|`-3022`| Invalid VID length| -|`-3031`| Invalid filter| -|`-3032`| Invalid field update| -|`-3033`| Invalid KV storage| -|`-3034`| Peer invalid| -|`-3035`| Out of retries| -|`-3036`| Leader change failed| -|`-3037`| Invalid stat type| -|`-3038`| VID is invalid| -|`-3040`| Failed to load meta information| -|`-3041`| Failed to generate checkpoint| -|`-3042`| Generating checkpoint is blocked| -|`-3043`| Data is filtered| -|`-3044`| Invalid data| -|`-3045`| Concurrent write conflicts on the same edge| -|`-3046`| Concurrent write conflict on the same vertex | -|`-3047`| Lock is invalid| -|`-3051`| Invalid task parameter| -|`-3052`| The user canceled the task| -|`-3053`| Task execution failed| -|`-3060`| Execution plan was cleared| -|`-3061`| Client and server versions are not compatible| -|`-3062`| Failed to get ID serial number| -|`-3070`| The heartbeat process was not completed when the request was received| -|`-3071`| Out-of-date heartbeat received from the old leader (the new leader has been elected)| -|`-3073`| Concurrent write conflicts with later requests| -|`-3500`| Unknown partition| -|`-3501`| Raft logs lag behind| -|`-3502`| Raft logs are out of date| -|`-3503`| Heartbeat messages are out of date| -|`-3504`| Unknown additional logs| -|`-3511`| Waiting for the snapshot to complete| -|`-3512`| There was an error sending the snapshot| -|`-3513`| Invalid receiver| -|`-3514`| Raft did not start| -|`-3515`| Raft has stopped| -|`-3516`| Wrong role| -|`-3521`| Write to a WAL failed| -|`-3522`| The host has stopped| -|`-3523`| Too many requests| -|`-3524`| Persistent snapshot failed| -|`-3525`| RPC exception| -|`-3526`| No WAL logs found| -|`-3527`| Host suspended| -|`-3528`| Writes are blocked| -|`-3529`| Cache overflow| -|`-3530`| Atomic operation failed| -|`-3531`| Leader lease expired| -|`-3532`| Data has been synchronized on Raft| 
-|`-4001`| Drainer logs lag behind| -|`-4002`| Drainer logs are out of date| -|`-4003`| The drainer data storage is invalid| -|`-4004`| Graph space mismatch| -|`-4005`| Partition mismatch| -|`-4006`| Data conflict| -|`-4007`| Request conflict| -|`-4008`| Illegal data| -|`-5001`| Cache configuration error| -|`-5002`| Insufficient space| -|`-5003`| No cache hit| -|`-5005`| Write cache failed| -|`-7001`| Number of machines exceeded the limit| -|`-7002`| Failed to resolve certificate| -|`-8000`| Unknown error| +|Error name|Error Code|Description| +|:---|:---|:---| +|`E_DISCONNECTED`|`-1`| Lost connection | +|`E_FAIL_TO_CONNECT`|`-2`| Unable to establish connection | +|`E_RPC_FAILURE`|`-3`| RPC failure | +|`E_LEADER_CHANGED`|`-4`| Raft leader has been changed| +|`E_SPACE_NOT_FOUND`|`-5`| Graph space does not exist | +|`E_TAG_NOT_FOUND`|`-6`| Tag does not exist | +|`E_EDGE_NOT_FOUND`|`-7`| Edge type does not exist | +|`E_INDEX_NOT_FOUND`|`-8`| Index does not exist| +|`E_EDGE_PROP_NOT_FOUND`|`-9`| Edge type property does not exist| +|`E_TAG_PROP_NOT_FOUND`|`-10`| Tag property does not exist| +|`E_ROLE_NOT_FOUND`|`-11`| The current role does not exist| +|`E_CONFIG_NOT_FOUND`|`-12`| The current configuration does not exist| +|`E_MACHINE_NOT_FOUND`|`-13`| The current host does not exist| +|`E_LISTENER_NOT_FOUND`|`-15`| Listener does not exist| +|`E_PART_NOT_FOUND`|`-16`| The current partition does not exist| +|`E_KEY_NOT_FOUND`|`-17`| Key does not exist| +|`E_USER_NOT_FOUND`|`-18`| User does not exist| +|`E_STATS_NOT_FOUND`|`-19`| Statistics do not exist| +|`E_SERVICE_NOT_FOUND`|`-20`| No current service found| +|`E_BACKUP_FAILED`|`-24`| Backup failed| +|`E_BACKUP_EMPTY_TABLE`|`-25`| The backed-up table is empty| +|`E_BACKUP_TABLE_FAILED`|`-26`| Table backup failure| +|`E_PARTIAL_RESULT`|`-27`| MultiGet could not get all data| +|`E_REBUILD_INDEX_FAILED`|`-28`| Index rebuild failed| +|`E_INVALID_PASSWORD`|`-29`| Password is invalid| +|`E_FAILED_GET_ABS_PATH`|`-30`| Unable to get absolute path| +|`E_BAD_USERNAME_PASSWORD`|`-1001`| Authentication failed| +|`E_SESSION_INVALID`|`-1002`| Invalid session| +|`E_SESSION_TIMEOUT`|`-1003`| Session timeout| +|`E_SYNTAX_ERROR`|`-1004`| Syntax error| +|`E_EXECUTION_ERROR`|`-1005`| Execution error| +|`E_STATEMENT_EMPTY`|`-1006`| Statement is empty| +|`E_BAD_PERMISSION`|`-1008`| Permission denied| +|`E_SEMANTIC_ERROR`|`-1009`| Semantic error| +|`E_TOO_MANY_CONNECTIONS`|`-1010`| Maximum number of connections exceeded| +|`E_PARTIAL_SUCCEEDED`|`-1011`| Access to storage failed (only some requests succeeded)| +|`E_NO_HOSTS`|`-2001`| Host does not exist| +|`E_EXISTED`|`-2002`| Host already exists| +|`E_INVALID_HOST`|`-2003`| Invalid host| +|`E_UNSUPPORTED`|`-2004`| The current command, statement, or function is not supported| +|`E_NOT_DROP`|`-2005`| Not allowed to drop| +|`E_CONFIG_IMMUTABLE`|`-2007`| Configuration items cannot be changed| +|`E_CONFLICT`|`-2008`| Parameters conflict with meta data| +|`E_INVALID_PARM`|`-2009`| Invalid parameter| +|`E_WRONGCLUSTER`|`-2010`| Wrong cluster| +|`E_ZONE_NOT_ENOUGH`|`-2011`| Listener conflicts| +|`E_ZONE_IS_EMPTY`|`-2012`| The host does not exist| +|`E_SCHEMA_NAME_EXISTS`|`-2013`| Schema name already exists| +|`E_RELATED_INDEX_EXISTS`|`-2014`| There are still indexes related to the tag or edge type, so it cannot be dropped| +|`E_RELATED_SPACE_EXISTS`|`-2015`| There are still graph spaces on the host, so it cannot be dropped| +|`E_STORE_FAILURE`|`-2021`| Failed to store data| +|`E_STORE_SEGMENT_ILLEGAL`|`-2022`| Illegal storage segment|
+|`E_BAD_BALANCE_PLAN`|`-2023`| Invalid data balancing plan| +|`E_BALANCED`|`-2024`| The cluster is already in the data balancing status| +|`E_NO_RUNNING_BALANCE_PLAN`|`-2025`| There is no running data balancing plan| +|`E_NO_VALID_HOST`|`-2026`| Lack of valid hosts| +|`E_CORRUPTED_BALANCE_PLAN`|`-2027`| A data balancing plan that has been corrupted| +|`E_IMPROPER_ROLE`|`-2030`| Failed to recover user role| +|`E_INVALID_PARTITION_NUM`|`-2031`| Invalid number of partitions| +|`E_INVALID_REPLICA_FACTOR`|`-2032`| Invalid replica factor| +|`E_INVALID_CHARSET`|`-2033`| Invalid character set| +|`E_INVALID_COLLATE`|`-2034`| Invalid character sorting rules| +|`E_CHARSET_COLLATE_NOT_MATCH`|`-2035`| Character set and character sorting rule mismatch| +|`E_SNAPSHOT_FAILURE`|`-2040`| Failed to generate a snapshot| +|`E_BLOCK_WRITE_FAILURE`|`-2041`| Failed to write block data| +|`E_ADD_JOB_FAILURE`|`-2044`| Failed to add new task| +|`E_STOP_JOB_FAILURE`|`-2045`| Failed to stop task| +|`E_SAVE_JOB_FAILURE`|`-2046`| Failed to save task information| +|`E_BALANCER_FAILURE`|`-2047`| Data balancing failed| +|`E_JOB_NOT_FINISHED`|`-2048`| The current task has not been completed| +|`E_TASK_REPORT_OUT_DATE`|`-2049`| Task report failed| +|`E_JOB_NOT_IN_SPACE`|`-2050`| The current task is not in the graph space| +|`E_JOB_NEED_RECOVER`|`-2051`| The current task needs to be resumed| +|`E_JOB_ALREADY_FINISH`|`-2052`| The job status has already been failed or finished | +|`E_JOB_SUBMITTED`|`-2053`| Job default status| +|`E_JOB_NOT_STOPPABLE`|`-2054`| The given job does not support being stopped| +|`E_JOB_HAS_NO_TARGET_STORAGE`|`-2055`| The leader distribution has not been reported, so tasks cannot be sent to the storage service| +|`E_INVALID_JOB`|`-2065`| Invalid task| +|`E_BACKUP_BUILDING_INDEX`|`-2066`| Backup terminated (index being created)| +|`E_BACKUP_SPACE_NOT_FOUND`|`-2067`| Graph space does not exist at the time of backup| +|`E_RESTORE_FAILURE`|`-2068`| Backup recovery failed| +|`E_SESSION_NOT_FOUND`|`-2069`| Session does not exist| +|`E_LIST_CLUSTER_FAILURE`|`-2070`| Failed to get cluster information| +|`E_LIST_CLUSTER_GET_ABS_PATH_FAILURE`|`-2071`| Failed to get absolute path when getting cluster information| +|`E_LIST_CLUSTER_NO_AGENT_FAILURE`|`-2072`| Unable to get an agent when getting cluster information| +|`E_QUERY_NOT_FOUND`|`-2073`| Query not found| +|`E_AGENT_HB_FAILUE`|`-2074`| Failed to receive heartbeat from agent| +|`E_CONSENSUS_ERROR`|`-3001`| Consensus cannot be reached during an election| +|`E_KEY_HAS_EXISTS`|`-3002`| Key already exists| +|`E_DATA_TYPE_MISMATCH`|`-3003`| Data type mismatch| +|`E_INVALID_FIELD_VALUE`|`-3004`| Invalid field value| +|`E_INVALID_OPERATION`|`-3005`| Invalid operation| +|`E_NOT_NULLABLE`|`-3006`| Current value is not allowed to be empty| +|`E_FIELD_UNSET`|`-3007`| The field value must be set if the field is `NOT NULL` or has no default value| +|`E_OUT_OF_RANGE`|`-3008`| The value is out of the range of the current type| +|`E_DATA_CONFLICT_ERROR`|`-3010`| Data conflict| +|`E_WRITE_STALLED`|`-3011`| Writes are delayed| +|`E_IMPROPER_DATA_TYPE`|`-3021`| Incorrect data type| +|`E_INVALID_SPACEVIDLEN`|`-3022`| Invalid VID length| +|`E_INVALID_FILTER`|`-3031`| Invalid filter| +|`E_INVALID_UPDATER`|`-3032`| Invalid field update| +|`E_INVALID_STORE`|`-3033`| Invalid KV storage| +|`E_INVALID_PEER`|`-3034`| Peer invalid| +|`E_RETRY_EXHAUSTED`|`-3035`| Out of retries| +|`E_TRANSFER_LEADER_FAILED`|`-3036`| Leader change failed| +|`E_INVALID_STAT_TYPE`|`-3037`| Invalid stat type|
+|`E_INVALID_VID`|`-3038`| VID is invalid| +|`E_LOAD_META_FAILED`|`-3040`| Failed to load meta information| +|`E_FAILED_TO_CHECKPOINT`|`-3041`| Failed to generate checkpoint| +|`E_CHECKPOINT_BLOCKED`|`-3042`| Generating checkpoint is blocked| +|`E_FILTER_OUT`|`-3043`| Data is filtered| +|`E_INVALID_DATA`|`-3044`| Invalid data| +|`E_MUTATE_EDGE_CONFLICT`|`-3045`| Concurrent write conflicts on the same edge| +|`E_MUTATE_TAG_CONFLICT`|`-3046`| Concurrent write conflict on the same vertex | +|`E_OUTDATED_LOCK`|`-3047`| Lock is invalid| +|`E_INVALID_TASK_PARA`|`-3051`| Invalid task parameter| +|`E_USER_CANCEL`|`-3052`| The user canceled the task| +|`E_TASK_EXECUTION_FAILED`|`-3053`| Task execution failed| +|`E_PLAN_IS_KILLED`|`-3060`| Execution plan was cleared| +|`E_NO_TERM`|`-3070`| The heartbeat process was not completed when the request was received| +|`E_OUTDATED_TERM`|`-3071`| Out-of-date heartbeat received from the old leader (the new leader has been elected)| +|`E_WRITE_WRITE_CONFLICT`|`-3073`| Concurrent write conflicts with later requests| +|`E_RAFT_UNKNOWN_PART`|`-3500`| Unknown partition| +|`E_RAFT_LOG_GAP`|`-3501`| Raft logs lag behind| +|`E_RAFT_LOG_STALE`|`-3502`| Raft logs are out of date| +|`E_RAFT_TERM_OUT_OF_DATE`|`-3503`| Heartbeat messages are out of date| +|`E_RAFT_UNKNOWN_APPEND_LOG`|`-3504`| Unknown additional logs| +|`E_RAFT_WAITING_SNAPSHOT`|`-3511`| Waiting for the snapshot to complete| +|`E_RAFT_SENDING_SNAPSHOT`|`-3512`| There was an error sending the snapshot| +|`E_RAFT_INVALID_PEER`|`-3513`| Invalid receiver| +|`E_RAFT_NOT_READY`|`-3514`| Raft did not start| +|`E_RAFT_STOPPED`|`-3515`| Raft has stopped| +|`E_RAFT_BAD_ROLE`|`-3516`| Wrong role| +|`E_RAFT_WAL_FAIL`|`-3521`| Write to a WAL failed| +|`E_RAFT_HOST_STOPPED`|`-3522`| The host has stopped| +|`E_RAFT_TOO_MANY_REQUESTS`|`-3523`| Too many requests| +|`E_RAFT_PERSIST_SNAPSHOT_FAILED`|`-3524`| Persistent snapshot failed| +|`E_RAFT_RPC_EXCEPTION`|`-3525`| RPC exception| +|`E_RAFT_NO_WAL_FOUND`|`-3526`| No WAL logs found| +|`E_RAFT_HOST_PAUSED`|`-3527`| Host suspended| +|`E_RAFT_WRITE_BLOCKED`|`-3528`| Writes are blocked| +|`E_RAFT_BUFFER_OVERFLOW`|`-3529`| Cache overflow| +|`E_RAFT_ATOMIC_OP_FAILED`|`-3530`| Atomic operation failed| +|`E_LEADER_LEASE_FAILED`|`-3531`| Leader lease expired| +|`E_RAFT_CAUGHT_UP`|`-3532`| Data has been synchronized on Raft| diff --git a/docs-2.0/20.appendix/release-notes/dashboard-ent-release-note.md b/docs-2.0/20.appendix/release-notes/dashboard-ent-release-note.md index 3fc0475bbc6..4933e91dd88 100644 --- a/docs-2.0/20.appendix/release-notes/dashboard-ent-release-note.md +++ b/docs-2.0/20.appendix/release-notes/dashboard-ent-release-note.md @@ -1,5 +1,37 @@ # NebulaGraph Dashboard Enterprise Edition release notes +## Enterprise Edition v3.2.4 + +- Enhancement + + - Disabled experimental features by default when installing NebulaGraph Enterprise Edition 3.1.3 or 3.4. + +## Enterprise Edition v3.2.3 + +- Enhancement + + - Hid the Backup & Restore page when the NebulaGraph Enterprise Edition version is above 3.3.0. + +## Enterprise Edition v3.2.2 + +- Enhancement + + - Deleted unnecessary public folders. + +- Bugfix + + - Fixed the bug that the RPM and DEB packages could not automatically register services with the Dashboard. + +## Enterprise Edition v3.2.1 + +- Enhancement + + - Added NebulaGraph 3.3.0 to the download list. + +- Bugfix + + - Fixed the bug that BR failed in NebulaGraph Community Edition 3.3.0.
+ ## Enterprise Edition 3.2.0 - Feature diff --git a/docs-2.0/20.appendix/release-notes/explorer-release-note.md b/docs-2.0/20.appendix/release-notes/explorer-release-note.md index 3b71e964a7f..3073e0ba455 100644 --- a/docs-2.0/20.appendix/release-notes/explorer-release-note.md +++ b/docs-2.0/20.appendix/release-notes/explorer-release-note.md @@ -1,5 +1,11 @@ # NebulaGraph Explorer release notes +## v3.2.1 + +- Bugfix + - Fixed the bug that a connection timeout and HTTP error `500` occurred when connecting to a non-existent address. + - Fixed the bug that the vertex properties could not be displayed on the canvas when randomly importing vertices. + ## v3.2.0 - Feature @@ -23,6 +29,7 @@ - The help page provides introductory videos. - Workflow supports the configuration of resources on the page. - Added a white screen page for the crash. + - Optimized page loading speed. - Bugfix - Fixed the bug that the right-click menu would not collapse automatically. diff --git a/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md b/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md index cee24469ea0..8b756f6837d 100644 --- a/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md +++ b/docs-2.0/3.ngql-guide/13.edge-statements/1.insert-edge.md @@ -2,7 +2,7 @@ The `INSERT EDGE` statement inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in NebulaGraph. -When inserting an edge that already exists, `INSERT VERTEX` **overrides** the edge. +When inserting an edge that already exists, `INSERT EDGE` **overrides** the edge. ## Syntax diff --git a/docs-2.0/3.ngql-guide/16.subgraph-and-path/1.get-subgraph.md b/docs-2.0/3.ngql-guide/16.subgraph-and-path/1.get-subgraph.md index 2aebd01fdd8..bc7ba12f392 100644 --- a/docs-2.0/3.ngql-guide/16.subgraph-and-path/1.get-subgraph.md +++ b/docs-2.0/3.ngql-guide/16.subgraph-and-path/1.get-subgraph.md @@ -35,8 +35,9 @@ While using the `WHERE` clause in a `GET SUBGRAPH` statement, note the following - **Only support** the `AND` operator. - **Only support** filtering destination vertices. The vertex format must be `$$.tagName.propName`. +- **Support** filtering edges. The edge format must be `edge_type.propName`. - **Support** math functions, aggregate functions, string functions, datetime functions, type conversion functions and general functions in list functions. -- **Not support** aggregate functions, schema functions, conditional expression, predicate functions, geography function and user-defined functions. +- **Not support** aggregate functions, schema-related functions, conditional expressions, predicate functions, geography functions, and user-defined functions. ## Examples diff --git a/docs-2.0/3.ngql-guide/3.data-types/4.date-and-time.md b/docs-2.0/3.ngql-guide/3.data-types/4.date-and-time.md index e290b6ceb58..0bd4e23f70c 100644 --- a/docs-2.0/3.ngql-guide/3.data-types/4.date-and-time.md +++ b/docs-2.0/3.ngql-guide/3.data-types/4.date-and-time.md @@ -96,9 +96,9 @@ The `TIMESTAMP` data type is used for values that contain both date and time par - Supported `TIMESTAMP` inserting methods: timestamp, `timestamp()` function, and `now()` function. -- `timestamp()` function accepts empty arguments to get the timestamp of the current timezone. +- `timestamp()` function accepts empty arguments to get the current timestamp.
It can also accept an integer argument, which is identified as a timestamp, and the range of the passed integer is `0~9223372036`. -- `timestamp()` function can convert `DATETIME` to `TIMESTAMP`. The data type of `DATETIME` should be a `string`. +- `timestamp()` function can convert `DATETIME` to `TIMESTAMP`, and the data type of `DATETIME` should be a `string`. - The underlying storage data type is **int64**. diff --git a/docs-2.0/3.ngql-guide/4.variable-and-composite-queries/3.property-reference.md b/docs-2.0/3.ngql-guide/4.variable-and-composite-queries/3.property-reference.md index dba3a2b24e8..c98378461e7 100644 --- a/docs-2.0/3.ngql-guide/4.variable-and-composite-queries/3.property-reference.md +++ b/docs-2.0/3.ngql-guide/4.variable-and-composite-queries/3.property-reference.md @@ -95,7 +95,7 @@ nebula> GO FROM "player100" OVER follow YIELD follow._src, follow._dst, follow._ !!! compatibility "Legacy version compatibility" - NebulaGraph 2.6.0 and later versions support the new [Schema function](../6.functions-and-expressions/4.schema.md). Similar statements as the above examples are written as follows in {{ nebula.release}}. + NebulaGraph 2.6.0 and later versions support the new [Schema-related functions](../6.functions-and-expressions/4.schema.md). Statements similar to the above examples are written as follows in {{ nebula.release}}. ```ngql GO FROM "player100" OVER follow YIELD properties($^).name AS startName, properties($$).age AS endAge; ``` diff --git a/docs-2.0/3.ngql-guide/6.functions-and-expressions/1.math.md b/docs-2.0/3.ngql-guide/6.functions-and-expressions/1.math.md index 734f99bc5b5..6e5331a3400 100644 --- a/docs-2.0/3.ngql-guide/6.functions-and-expressions/1.math.md +++ b/docs-2.0/3.ngql-guide/6.functions-and-expressions/1.math.md @@ -551,11 +551,12 @@ nebula> RETURN bit_xor(5,6); ## size() -size() returns the number of elements in a list or a map. +size() returns the number of elements in a list or a map, or the length of a string. -Syntax: `size(<expression>)` +Syntax: `size({<expression>|<string>})` - `expression`: An expression for a list or map. +- `string`: A specified string. - Result type: Int @@ -570,6 +571,15 @@ nebula> RETURN size([1,2,3,4]); +-----------------+ ``` +```ngql +nebula> RETURN size("basketballplayer") AS size; ++------+ +| size | ++------+ +| 16 | ++------+ +``` + ## range() range() returns a list of integers from `[start,end]` in the specified steps. diff --git a/docs-2.0/3.ngql-guide/6.functions-and-expressions/15.aggregating.md b/docs-2.0/3.ngql-guide/6.functions-and-expressions/15.aggregating.md index 5bf09187c0f..699941f10c0 100644 --- a/docs-2.0/3.ngql-guide/6.functions-and-expressions/15.aggregating.md +++ b/docs-2.0/3.ngql-guide/6.functions-and-expressions/15.aggregating.md @@ -257,24 +257,32 @@ nebula> MATCH (n:player) \ | 25 | ["Joel Embiid", "Kyle Anderson"] | +-----+--------------------------------------------------------------------------+ ...
-``` -## Aggregating example +nebula> GO FROM "player100" OVER serve \ + YIELD properties($$).name AS name \ + | GROUP BY $-.name \ + YIELD collect($-.name) AS name; ++-----------+ +| name | ++-----------+ +| ["Spurs"] | ++-----------+ -```ngql -nebula> GO FROM "player100" OVER follow YIELD dst(edge) AS dst, properties($$).age AS age \ - | GROUP BY $-.dst \ - YIELD \ - $-.dst AS dst, \ - toInteger((sum($-.age)/count($-.age)))+avg(distinct $-.age+1)+1 AS statistics; -+-------------+------------+ -| dst | statistics | -+-------------+------------+ -| "player125" | 84.0 | -| "player101" | 74.0 | -+-------------+------------+ -``` +nebula> LOOKUP ON player \ + YIELD player.age As playerage \ + | GROUP BY $-.playerage \ + YIELD collect($-.playerage) AS playerage; ++------------------+ +| playerage | ++------------------+ +| [22] | +| [47] | +| [43] | +| [25, 25] | ++------------------+ +... +``` ## std() @@ -312,4 +320,22 @@ nebula> MATCH (v:player) RETURN sum(v.player.age); +-------------------+ | 1698 | +-------------------+ -``` \ No newline at end of file +``` + +## Aggregating example + +```ngql +nebula> GO FROM "player100" OVER follow YIELD dst(edge) AS dst, properties($$).age AS age \ + | GROUP BY $-.dst \ + YIELD \ + $-.dst AS dst, \ + toInteger((sum($-.age)/count($-.age)))+avg(distinct $-.age+1)+1 AS statistics; ++-------------+------------+ +| dst | statistics | ++-------------+------------+ +| "player125" | 84.0 | +| "player101" | 74.0 | ++-------------+------------+ +``` + + diff --git a/docs-2.0/3.ngql-guide/6.functions-and-expressions/16.type-conversion.md b/docs-2.0/3.ngql-guide/6.functions-and-expressions/16.type-conversion.md index 3bccffb7e14..3b7be9f7a18 100644 --- a/docs-2.0/3.ngql-guide/6.functions-and-expressions/16.type-conversion.md +++ b/docs-2.0/3.ngql-guide/6.functions-and-expressions/16.type-conversion.md @@ -145,4 +145,5 @@ nebula> YIELD hash(toLower("HELLO NEBULA")); +-------------------------------+ | -8481157362655072082 | +-------------------------------+ -``` \ No newline at end of file +``` + diff --git a/docs-2.0/3.ngql-guide/6.functions-and-expressions/2.string.md b/docs-2.0/3.ngql-guide/6.functions-and-expressions/2.string.md index 187cea9bfc0..d02ab3e2c40 100644 --- a/docs-2.0/3.ngql-guide/6.functions-and-expressions/2.string.md +++ b/docs-2.0/3.ngql-guide/6.functions-and-expressions/2.string.md @@ -77,10 +77,10 @@ nebula> RETURN upper("Basketball_Player"); length() returns the length of the given string in bytes. -Syntax: `length()` +Syntax: `length({|})` - `string`: A specified string. - +- `path`: A specified path represented by a variable. - Result type: Int Example: @@ -94,6 +94,17 @@ nebula> RETURN length("basketball"); +----------------------+ ``` +```ngql +nebula> MATCH p=(v:player{name:"Tim Duncan"})-->(v2) return length(p); ++-----------+ +| length(p) | ++-----------+ +| 1 | +| 1 | +| 1 | ++-----------+ +``` + ## trim() trim() removes the spaces at the leading and trailing of the string. diff --git a/docs-2.0/3.ngql-guide/6.functions-and-expressions/3.date-and-time.md b/docs-2.0/3.ngql-guide/6.functions-and-expressions/3.date-and-time.md index f304db740d4..0681d403bc1 100644 --- a/docs-2.0/3.ngql-guide/6.functions-and-expressions/3.date-and-time.md +++ b/docs-2.0/3.ngql-guide/6.functions-and-expressions/3.date-and-time.md @@ -4,8 +4,8 @@ NebulaGraph supports the following built-in date and time functions: | Function | Description | |:-- |:-- | -| int now() | Returns the current date and time of the system time zone. 
| -| timestamp timestamp() | Returns the current date and time of the system time zone. | +| int now() | Returns the current timestamp of the system. | +| timestamp timestamp() | Returns the current timestamp of the system. | | date date() | Returns the current UTC date based on the current system. | | time time() | Returns the current UTC time based on the current system. | | datetime datetime() | Returns the current UTC date and time based on the current system. | diff --git a/docs-2.0/3.ngql-guide/6.functions-and-expressions/4.schema.md b/docs-2.0/3.ngql-guide/6.functions-and-expressions/4.schema.md index 096a5115ec3..89a1c26d84f 100644 --- a/docs-2.0/3.ngql-guide/6.functions-and-expressions/4.schema.md +++ b/docs-2.0/3.ngql-guide/6.functions-and-expressions/4.schema.md @@ -1,6 +1,6 @@ -# Schema functions +# Schema-related functions -This topic describes the schema functions supported by NebulaGraph. There are two types of schema functions, one for native nGQL statements and the other for openCypher-compatible statements. +This topic describes the schema-related functions supported by NebulaGraph. There are two types of schema-related functions: one for native nGQL statements and the other for openCypher-compatible statements. ## For nGQL statements @@ -114,6 +114,10 @@ nebula> GO FROM "player100" OVER follow \ +-------------+-------------+ ``` +!!! note + + The semantics of src(edge) and [properties(`$^`)](../5.operators/5.property-reference.md) are different. src(edge) indicates the source vertex ID of an edge stored in the graph database, while properties(`$^`) indicates the data of the vertex from which you start to expand the graph, such as the starting vertex `player100` in the above GO statement. + ### dst(edge) dst(edge) returns the destination vertex ID of an edge. @@ -135,6 +139,10 @@ nebula> GO FROM "player100" OVER follow \ +-------------+-------------+ ``` +!!! note + + dst(edge) indicates the destination vertex ID of the edge in the graph database. + ### rank(edge) rank(edge) returns the rank value of an edge. @@ -339,7 +347,7 @@ nebula> MATCH (v:player{name:"Tim Duncan"})-[e]->() \ ### startNode() -startNode() visits an edge or a path and returns its information of source vertex ID, including VIDs, tags, properties, and values. +startNode() visits a path and returns the information of its source vertex, including the VID, tags, properties, and values. Syntax: `startNode(<path>)` @@ -357,7 +365,7 @@ nebula> MATCH p = (a :player {name : "Tim Duncan"})-[r:serve]-(t) \ ### endNode() -endNode() visits an edge or a path and returns its information of destination vertex ID, including VIDs, tags, properties, and values. +endNode() visits a path and returns the information of its destination vertex, including the VID, tags, properties, and values. Syntax: `endNode(<path>)` diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/2.match.md b/docs-2.0/3.ngql-guide/7.general-query-statements/2.match.md index 57ecaf5f21c..742b8262674 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/2.match.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/2.match.md @@ -8,7 +8,7 @@ The examples in this topic use the [basketballplayer](../1.nGQL-overview/1.overv ## Syntax -The syntax of `MATCH` is more flexible than that of other query statements such as `GO` or `LOOKUP`. The path type of the `MATCH` statement is `trail`. That is, only vertices can be repeatedly visited in the graph traversal. Edges cannot be repeatedly visited. For details, see [path](../../1.introduction/2.1.path.md). But generally, it can be summarized as follows.
+The syntax of `MATCH` is relatively more flexible compared with that of other query statements such as `GO` or `LOOKUP`. The path type of the `MATCH` statement is `trail`. That is, only vertices can be repeatedly visited in the graph traversal. Edges cannot be repeatedly visited. For details, see [path](../../1.introduction/2.1.path.md). But generally, it can be summarized as follows. ```ngql MATCH [] RETURN []; @@ -22,7 +22,7 @@ MATCH [] RETURN []; - `clause_2`: The `ORDER BY` and `LIMIT` clauses are supported. -## Precautions +## Limitations !!! compatibility "Legacy version compatibility" @@ -30,19 +30,18 @@ MATCH [] RETURN []; !!! note - Currently the `match` statement cannot find dangling edges. + - Currently the `match` statement cannot find dangling edges. + - It is not supported to traverse the specified Tag and Edge Type at the same time when there is no index. For example, executing `MATCH (v:player)-[e:follow]->() RETURN e LIMIT N` an error will occur. -- The `MATCH` statement retrieves data according to the `RETURN` clause. - -- The path type of the `MATCH` statement is `trail`. That is, only vertices can be repeatedly visited in the graph traversal. Edges cannot be repeatedly visited. For details, see [path](../../1.introduction/2.1.path.md). +When no [index](../14.native-index-statements/1.create-native-index.md) has been created, the `MATCH` statements is only supported in the following cases. When you get an error executing the `MATCH` statement, you can create and rebuild the index and then execute the `MATCH` statement. - In a valid `MATCH` statement, the VID of a specific vertex must be specified with the id() function in the `WHERE` clause. There is no need to create an index. -- When traversing all vertices and edges with `MATCH`, such as `MATCH (v) RETURN v LIMIT N`, there is no need to create an index, but you need to use `LIMIT` to limit the number of output results. +- When traversing all vertices o r edges with `MATCH`, such as `MATCH (v) RETURN v LIMIT N`,`MATCH ()-[e]->() RETURN e LIMIT N`. -- When traversing all vertices of the specified Tag or edge of the specified Edge Type, such as `MATCH (v:player) RETURN v LIMIT N`, there is no need to create an index, but you need to use `LIMIT` to limit the number of output results. +- When traversing all vertices of the specified Tag, such as `MATCH (v:player) RETURN v LIMIT N`. -- In addition to the foregoing, make sure there is at least one index in the `MATCH` statement. How to create native indexes, see [CREATE INDEX](../3/../14.native-index-statements/1.create-native-index.md). +- When traversing all edges of the specified Edge Type(edges must have direction), such as `MATCH ()-[e:follow]->() RETURN e LIMIT N`. ## Using patterns in MATCH statements @@ -195,6 +194,21 @@ nebula> MATCH (v:player) \ In openCypher 9, `=` is the equality operator. However, in nGQL, `==` is the equality operator and `=` is the assignment operator (as in C++ or Java). + +You also use properties without specifying a tag to get vertices directly. For example, to get all the vertices with the vertex property value Tim Duncan. 
+ +```ngql +nebula> MATCH (v) \ + WITH v, properties(v) as props, keys(properties(v)) as kk \ + LIMIT 10000 WHERE [i in kk where props[i] == "Tim Duncan"] \ + RETURN v; ++----------------------------------------------------+ +| v | ++----------------------------------------------------+ +| ("player100" :player{age: 42, name: "Tim Duncan"}) | ++----------------------------------------------------+ +``` + ### Match VIDs You can use the VID to match a vertex. The `id()` function can retrieve the VID of a vertex. @@ -399,6 +413,25 @@ nebula> MATCH (v:player{name:"Tim Duncan"})-[e:follow{degree:95}]->(v2) \ +--------------------------------------------------------+ ``` + +You can also use properties without specifying an edge type to get edges directly. For example, to get all the edges with the edge property value 90. + +```ngql +nebula> MATCH ()-[e]->() \ + WITH e, properties(e) as props, keys(properties(e)) as kk \ + LIMIT 10000 WHERE [i in kk where props[i] == 90] \ + RETURN e; ++----------------------------------------------------+ +| e | ++----------------------------------------------------+ +| [:follow "player125"->"player100" @0 {degree: 90}] | +| [:follow "player140"->"player114" @0 {degree: 90}] | +| [:follow "player133"->"player144" @0 {degree: 90}] | +| [:follow "player133"->"player114" @0 {degree: 90}] | +... ++----------------------------------------------------+ +``` + ### Match multiple edge types The `|` symbol can help match multiple edge types. For example: `[e:follow|:serve]`. The English colon (:) before the first edge type cannot be omitted, but the English colon before the subsequent edge type can be omitted, such as `[e:follow|serve]`. diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md b/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md index 5227625bd0f..2217bced8e9 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/3.go.md @@ -47,9 +47,10 @@ YIELD [DISTINCT] <return_list> !!! note - There are some restrictions for the `WHERE` clause when you traverse along with multiple edge types. For example, `WHERE edge1.prop1 > edge2.prop2` is not supported. + - There are some restrictions for the `WHERE` clause when you traverse along with multiple edge types. For example, `WHERE edge1.prop1 > edge2.prop2` is not supported. + - The `GO` statement is executed by first traversing all the vertices and then filtering them according to the filter condition. -- `YIELD [DISTINCT] <return_list>`: defines the output to be returned. It is recommended to use the [Schema function](../6.functions-and-expressions/4.schema.md) to fill in `<return_list>`. `src(edge)`, `dst(edge)`, `type(edge) )`, `rank(edge)`, etc., are currently supported, while nested functions are not. For more information, see [YIELD](../8.clauses-and-options/yield.md). +- `YIELD [DISTINCT] <return_list>`: defines the output to be returned. It is recommended to use the [Schema-related functions](../6.functions-and-expressions/4.schema.md) to fill in `<return_list>`. `src(edge)`, `dst(edge)`, `type(edge)`, `rank(edge)`, etc., are currently supported, while nested functions are not (see the sketch after this list). For more information, see [YIELD](../8.clauses-and-options/yield.md). - `SAMPLE <sample_list>`: takes samples from the result set. For more information, see [SAMPLE](../8.clauses-and-options/sample.md).
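+For instance, the following is a minimal sketch that fills `<return_list>` with schema-related functions only; it assumes the `basketballplayer` dataset used in these topics, and the returned rows depend on your data: + +```ngql +# Traverse one hop over follow edges and return only schema-related function outputs. +nebula> GO FROM "player100" OVER follow \ + YIELD src(edge) AS src, dst(edge) AS dst, rank(edge) AS rank; +```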
diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md b/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md index c0fed0df54a..c601d942f93 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/5.lookup.md @@ -54,10 +54,12 @@ YIELD <return_list> [AS <alias>]; The `WHERE` clause in a `LOOKUP` statement does not support the following operations: - `$-` and `$^`. +- Filtering on `rank()`. - In relational expressions, operators are not supported to have field names on both sides, such as `tagName.prop1 > tagName.prop2`. - Nested AliasProp expressions in operation expressions and function expressions are not supported. - The `XOR` operation is not supported. - +- String operations other than `STARTS WITH`. +- Graph patterns. ## Retrieve vertices diff --git a/docs-2.0/3.ngql-guide/7.general-query-statements/optional-match.md b/docs-2.0/3.ngql-guide/7.general-query-statements/optional-match.md index 69885a3a402..5dca761e716 100644 --- a/docs-2.0/3.ngql-guide/7.general-query-statements/optional-match.md +++ b/docs-2.0/3.ngql-guide/7.general-query-statements/optional-match.md @@ -1,5 +1,9 @@ # OPTIONAL MATCH +!!! caution + + The feature is still in beta. It will continue to be optimized. + The `OPTIONAL MATCH` clause is used to search for the pattern described in it. `OPTIONAL MATCH` matches patterns against your graph database, just like `MATCH` does. The difference is that if no matches are found, `OPTIONAL MATCH` will use a null for missing parts of the pattern. ## OpenCypher Compatibility @@ -36,4 +40,4 @@ nebula> MATCH (m)-[]->(n) WHERE id(m)=="player100" \ | "player100" | "player125" | "team204" | | "player100" | "player125" | "player100" | +-------------+-------------+-------------+ -``` \ No newline at end of file +``` diff --git a/docs-2.0/3.ngql-guide/8.clauses-and-options/group-by.md b/docs-2.0/3.ngql-guide/8.clauses-and-options/group-by.md index 12b0c9034d9..b5d28b3e8b0 100644 --- a/docs-2.0/3.ngql-guide/8.clauses-and-options/group-by.md +++ b/docs-2.0/3.ngql-guide/8.clauses-and-options/group-by.md @@ -59,8 +59,6 @@ nebula> GO FROM "player100" OVER follow BIDIRECT \ +---------------------+------------+ ``` -## Group and calculate with functions - The following statement finds all the vertices connected directly to vertex `"player100"`, groups the result set by source vertices, and returns the sum of degree values. ```ngql @@ -76,3 +74,27 @@ nebula> GO FROM "player100" OVER follow \ +------------+ ``` For more information about the `sum()` function, see [Built-in math functions](../6.functions-and-expressions/1.math.md). + + +## Implicit GROUP BY + +In the above nGQL statements, `GROUP BY` is written explicitly to specify the grouping fields, which is called an explicit `GROUP BY`. In openCypher, the `GROUP BY` is implicit: the grouping fields take effect without writing `GROUP BY`. The explicit `GROUP BY` in nGQL works the same as the implicit `GROUP BY` in openCypher, and nGQL also supports the implicit `GROUP BY`. For the implicit usage of `GROUP BY`, see [how-to-make-group-by-in-a-cypher-query](https://stackoverflow.com/questions/52722671/how-to-make-group-by-in-a-cypher-query).
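+A minimal sketch of the implicit usage in an openCypher-compatible statement, assuming the `basketballplayer` dataset (no `GROUP BY` is written; the aggregate makes `age` the grouping field): + +```ngql +# count(*) is computed once per distinct age value, so age acts as the implicit grouping field. +nebula> MATCH (v:player) \ + RETURN v.player.age AS age, count(*) AS cnt; +```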
+ + +For example, to look up the players over 34 years old with the same length of service, you can use the following statement: + +```ngql +nebula> LOOKUP ON player WHERE player.age > 34 YIELD id(vertex) AS v | \ + GO FROM $-.v OVER serve YIELD serve.start_year AS start_year, serve.end_year AS end_year | \ + YIELD $-.start_year, $-.end_year, count(*) AS count | \ + ORDER BY $-.count DESC | LIMIT 5; ++---------------+-------------+-------+ +| $-.start_year | $-.end_year | count | ++---------------+-------------+-------+ +| 2018 | 2019 | 3 | +| 1998 | 2004 | 2 | +| 2012 | 2013 | 2 | +| 2007 | 2012 | 2 | +| 2010 | 2011 | 2 | ++---------------+-------------+-------+ +``` \ No newline at end of file diff --git a/docs-2.0/4.deployment-and-installation/1.resource-preparations.md b/docs-2.0/4.deployment-and-installation/1.resource-preparations.md index ed17bde7890..6a922118597 100644 --- a/docs-2.0/4.deployment-and-installation/1.resource-preparations.md +++ b/docs-2.0/4.deployment-and-installation/1.resource-preparations.md @@ -19,11 +19,20 @@ NebulaGraph is designed and implemented for NVMe SSD. All default parameters are - Use local SSD devices, or AWS Provisioned IOPS SSD equivalence. ## About CPU architecture + +{{ ent.ent_begin }} +!!! enterpriseonly + + You can run NebulaGraph Enterprise Edition on ARM, including Apple Mac M1 and Huawei Kunpeng. [Contact us](https://nebula-graph.com.cn/pricing/) for details. + +{{ ent.ent_end }} !!! note Starting with 3.0.2, you can run containerized NebulaGraph databases on Docker Desktop for ARM macOS or on ARM Linux servers. + + ## Requirements for compiling the source code ### Hardware requirements for compiling NebulaGraph @@ -191,7 +200,7 @@ For a more common test environment, such as a cluster of 3 machines (named as A, | CPU architecture | x86_64 | | Number of CPU core | 48 | | Memory | 256 GB | -| Disk | 1TB, NVMe SSD | +| Disk | 2 * 1.6 TB, NVMe SSD | ### Supported operating systems for production environments @@ -203,7 +212,7 @@ Users can adjust some of the kernel parameters to better accommodate the need fo !!! danger - **DO NOT** deploy a cluster across IDCs. + **DO NOT** deploy a single cluster across IDCs (the Enterprise Edition supports data synchronization between clusters across IDCs). | Process | Suggested number | | ----------------------------------------- | ---------------- | diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md index 6c6316c4e6d..71193bf3bce 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/1.install-nebula-graph-by-compiling-the-source-code.md @@ -82,7 +82,9 @@ The source code of the master branch changes frequently.
If the corresponding Ne ## Next to do +{{ ent.ent_begin }} - (Enterprise Edition)[Deploy license](../deploy-license.md) +{{ ent.ent_end }} - [Manage NebulaGraph services](../../2.quick-start/5.start-stop-service.md) diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md index eef8666d55a..86b1c840293 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md @@ -18,7 +18,7 @@ Using Docker Compose can quickly deploy NebulaGraph services based on the prepar * If you have already deployed another version of NebulaGraph with Docker Compose on your host, to avoid compatibility issues, you need to delete the `nebula-docker-compose/data` directory. -## How to deploy and connect to NebulaGraph +## Deploy NebulaGraph 1. Clone the `{{dockercompose.release}}` branch of the `nebula-docker-compose` repository to your host with Git. @@ -42,7 +42,6 @@ Using Docker Compose can quickly deploy NebulaGraph services based on the prepar 3. Run the following command to start all the NebulaGraph services. - Starting with 3.0.2, NebulaGraph comes with ARM64 Linux Docker images. You can run containerized NebulaGraph databases on Docker Desktop for ARM macOS or on ARM Linux servers. !!! Note @@ -50,74 +49,84 @@ Using Docker Compose can quickly deploy NebulaGraph services based on the prepar ```bash [nebula-docker-compose]$ docker-compose up -d - Creating nebula-docker-compose_metad0_1 ... done - Creating nebula-docker-compose_metad2_1 ... done - Creating nebula-docker-compose_metad1_1 ... done - Creating nebula-docker-compose_graphd2_1 ... done - Creating nebula-docker-compose_graphd_1 ... done - Creating nebula-docker-compose_graphd1_1 ... done - Creating nebula-docker-compose_storaged0_1 ... done - Creating nebula-docker-compose_storaged2_1 ... done - Creating nebula-docker-compose_storaged1_1 ... done + Creating nebuladockercompose_metad0_1 ... done + Creating nebuladockercompose_metad2_1 ... done + Creating nebuladockercompose_metad1_1 ... done + Creating nebuladockercompose_graphd2_1 ... done + Creating nebuladockercompose_graphd_1 ... done + Creating nebuladockercompose_graphd1_1 ... done + Creating nebuladockercompose_storaged0_1 ... done + Creating nebuladockercompose_storaged2_1 ... done + Creating nebuladockercompose_storaged1_1 ... done ``` - !!! Note + !!! compatibility + + Starting from NebulaGraph version 3.1.0, nebula-docker-compose automatically starts a NebulaGraph Console docker container and adds the storage host to the cluster (i.e. `ADD HOSTS` command). + + !!! note For more information of the preceding services, see [NebulaGraph architecture](../../1.introduction/3.nebula-graph-architecture/1.architecture-overview.md). -4. Connect to NebulaGraph. +## Connect to NebulaGraph - !!! Note - - Starting from NebulaGraph version 3.1.0, nebula-docker-compose automatically starts a NebulaGraph Console docker container and adds the storage host to the cluster (i.e. `ADD HOSTS` command). +There are two ways to connect to NebulaGraph: + +- Connected with Nebula Console outside the container. 
Because the external mapping port for the Graph service is also fixed as `9669` in the container's configuration file, you can connect directly through the default port. For details, see [Connect to NebulaGraph](../../2.quick-start/3.connect-to-nebula-graph.md). +- Log into the container where NebulaGraph Console is installed, and then connect to the Graph service. This section describes this approach. 1. Run the following command to view the name of NebulaGraph Console docker container. ```bash $ docker-compose ps Name Command State Ports -------------------------------------------------------------------------------------------- nebuladockercompose_console_1 sh -c sleep 3 && Up nebula-co ... ...... ``` 2. Run the following command to enter the NebulaGraph Console docker container. ```bash docker exec -it nebuladockercompose_console_1 /bin/sh / # ``` 3. Connect to NebulaGraph with NebulaGraph Console. ```bash / # ./usr/local/bin/nebula-console -u <user_name> -p <password> --address=graphd --port=9669 ``` !!! Note By default, authentication is off; you can only log in with an existing username (the default is `root`) and any password. To turn it on, see [Enable authentication](../../7.data-security/1.authentication/1.authentication.md). 4. Run the following commands to view the cluster state.

-    ```bash
-    nebula> SHOW HOSTS;
-    +-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
-    | Host        | Port | HTTP port | Status   | Leader count | Leader distribution  | Partition distribution | Version |
-    +-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
-    | "storaged0" | 9779 | 19779     | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.1.0" |
-    | "storaged1" | 9779 | 19779     | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.1.0" |
-    | "storaged2" | 9779 | 19779     | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.1.0" |
-    +-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
-    ```
+    ```bash
+    nebula> SHOW HOSTS;
+    +-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
+    | Host        | Port | HTTP port | Status   | Leader count | Leader distribution  | Partition distribution | Version |
+    +-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
+    | "storaged0" | 9779 | 19779     | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "x.x.x" |
+    | "storaged1" | 9779 | 19779     | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "x.x.x" |
+    | "storaged2" | 9779 | 19779     | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "x.x.x" |
+    +-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
+    ```

-5. Run `exit` twice to switch back to your terminal (shell).
+Run `exit` twice to switch back to your terminal (shell).

 ## Check the NebulaGraph service status and ports

 Run `docker-compose ps` to list all the services of NebulaGraph and their status and ports.

+!!! note
+
+    NebulaGraph provides services to the clients through port `9669` by default. To use other ports, modify the `docker-compose.yaml` file in the `nebula-docker-compose` directory and restart the NebulaGraph services.
+
 ```bash
 $ docker-compose ps
 nebuladockercompose_console_1 sh -c sleep 3 && Up
@@ -133,7 +142,30 @@ nebuladockercompose_storaged1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0
 nebuladockercompose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp
 ```

-NebulaGraph provides services to the clients through port `9669` by default. To use other ports, modify the `docker-compose.yaml` file in the `nebula-docker-compose` directory and restart the NebulaGraph services.
+If a service is abnormal, first confirm the name of the abnormal container (such as `nebuladockercompose_graphd2_1`).
+
+Then execute `docker ps` to find the corresponding `CONTAINER ID` (such as `2a6c56c405f5`). 
+
+```bash
+[nebula-docker-compose]$ docker ps
+CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS                    PORTS                                                                                                   NAMES
+2a6c56c405f5   vesoft/nebula-graphd:nightly     "/usr/local/nebula/b…"   36 minutes ago   Up 36 minutes (healthy)   0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp                            nebuladockercompose_graphd2_1
+7042e0a8e83d   vesoft/nebula-storaged:nightly   "./bin/nebula-storag…"   36 minutes ago   Up 36 minutes (healthy)   9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp   nebuladockercompose_storaged2_1
+18e3ea63ad65   vesoft/nebula-storaged:nightly   "./bin/nebula-storag…"   36 minutes ago   Up 36 minutes (healthy)   9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp   nebuladockercompose_storaged0_1
+4dcabfe8677a   vesoft/nebula-graphd:nightly     "/usr/local/nebula/b…"   36 minutes ago   Up 36 minutes (healthy)   0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp                            nebuladockercompose_graphd1_1
+a74054c6ae25   vesoft/nebula-graphd:nightly     "/usr/local/nebula/b…"   36 minutes ago   Up 36 minutes (healthy)   0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp                             nebuladockercompose_graphd_1
+880025a3858c   vesoft/nebula-storaged:nightly   "./bin/nebula-storag…"   36 minutes ago   Up 36 minutes (healthy)   9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp   nebuladockercompose_storaged1_1
+45736a32a23a   vesoft/nebula-metad:nightly      "./bin/nebula-metad …"   36 minutes ago   Up 36 minutes (healthy)   9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp                  nebuladockercompose_metad0_1
+3b2c90eb073e   vesoft/nebula-metad:nightly      "./bin/nebula-metad …"   36 minutes ago   Up 36 minutes (healthy)   9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp                  nebuladockercompose_metad2_1
+7bb31b7a5b3f   vesoft/nebula-metad:nightly      "./bin/nebula-metad …"   36 minutes ago   Up 36 minutes (healthy)   9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp                  nebuladockercompose_metad1_1
+```
+
+Use the `CONTAINER ID` to log in to the container and troubleshoot.
+
+```bash
+[nebula-docker-compose]$ docker exec -it 2a6c56c405f5 bash
+[root@2a6c56c405f5 nebula]#
+```

 ## Check the service data and logs

diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md
index 02931cbea41..35985eb7fae 100644
--- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md
+++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/4.install-nebula-graph-from-tar.md
@@ -4,7 +4,10 @@ You can install NebulaGraph by downloading the tar.gz file.

 !!! note

-    NebulaGraph provides installing with the tar.gz file starting from version 2.6.0.
+    - NebulaGraph supports installation with the tar.gz file starting from version 2.6.0.
+
+    - NebulaGraph can currently be installed only on Linux systems, and only the CentOS 7.x, CentOS 8.x, Ubuntu 16.04, Ubuntu 18.04, and Ubuntu 20.04 operating systems are supported. 
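+
+    You can quickly confirm that a host meets these OS requirements before downloading the package. The following commands are a minimal sketch; the exact fields printed depend on your distribution:
+
+    ```bash
+    # Print the OS name and version, for example CentOS 7.x or Ubuntu 20.04.
+    cat /etc/os-release
+    # Print the CPU architecture, for example x86_64.
+    uname -m
+    ```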
+ ## Installation steps diff --git a/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md b/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md index b7cf05810e6..db57efc7f95 100644 --- a/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md +++ b/docs-2.0/4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md @@ -161,6 +161,11 @@ All NebulaGraph clients in use must be switched to the old version. A: No. You must stop all NebulaGraph services during the upgrade. + +### The `Space 0 not found` warning message during the upgrade process + +When the `Space 0 not found` warning message appears during the upgrade process, you can ignore it. The space `0` is used to store meta information about the Storage service and does not contain user data, so it will not affect the upgrade. + ### How to upgrade if a machine has only the Graph Service, but not the Storage Service? A: You only need to update the configuration files and binaries of the Graph Service. diff --git a/docs-2.0/README.md b/docs-2.0/README.md index e901b62effb..148f0dc0198 100644 --- a/docs-2.0/README.md +++ b/docs-2.0/README.md @@ -26,11 +26,21 @@ NebulaGraph is a distributed, scalable, and lightning-fast graph database. It is * [FAQ](20.appendix/0.FAQ.md) * [Ecosystem Tools](20.appendix/6.eco-tool-version.md) + +## Release notes + +- [NebulaGraph Community Edition {{ nebula.release }}](20.appendix/release-notes/nebula-comm-release-note.md) + +- [NebulaGraph Studio](20.appendix/release-notes/studio-release-note.md) +- [NebulaGraph Explorer](20.appendix/release-notes/explorer-release-note.md) +- [NebulaGraph Dashboard Community Edition](20.appendix/release-notes/dashboard-comm-release-note.md) +- [NebulaGraph Dashboard Enterprise Edition](20.appendix/release-notes/dashboard-ent-release-note.md) + + ## Other Sources - [To cite NebulaGraph](https://arxiv.org/abs/2206.07278) - [NebulaGraph Homepage](https://nebula-graph.io/) -- [Release notes](20.appendix/release-notes/nebula-comm-release-note.md) - [Forum](https://discuss.nebula-graph.io/) - [Blogs](https://nebula-graph.io/posts/) - [Videos](https://www.youtube.com/channel/UC73V8q795eSEMxDX4Pvdwmw) diff --git a/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md b/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md index 7d338f647b2..762125f8b49 100644 --- a/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md +++ b/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md @@ -19,11 +19,10 @@ The BR has the following features. It supports: - Supports full backup, but not incremental backup. - Currently, NebulaGraph Listener and full-text indexes do not support backup. - If you back up data to the local disk, the backup files will be saved in the local path of each server. You can also mount the NFS on your host to restore the backup data to a different host. -- The backup graph space can be restored to the original cluster only. Cross clusters restoration is not supported. - During the backup process, both DDL and DML statements in the specified graph spaces are blocked. We recommend that you do the operation within the low peak period of the business, for example, from 2:00 AM to 5:00 AM. -- Restoration requires that the number of the storage servers in the original cluster is the same as that of the storage servers in the target cluster and storage server IPs must be the same. 
+- The backup graph space can be restored to the original cluster only. Cross-cluster restoration is not supported. Make sure the number of hosts in the cluster is not changed. Restoring a specified graph space will delete all other graph spaces in the cluster.
+- Restoration requires that the number of storage servers in the original cluster is the same as that in the target cluster, and the storage server IPs must be the same. Restoring the specified space will clear all the remaining spaces in the cluster.
 - During the restoration process, there is a time when NebulaGraph stops running.
-- If you back up data of a specified graph space in cluster A and restore the graph space data to cluster B, the data of other graph spaces in cluster B will be deleted.
 - Using BR in a container-based NebulaGraph cluster is not supported.

diff --git a/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md b/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md
index f6c36363c54..7bd8b731f12 100644
--- a/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md
+++ b/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md
@@ -2,6 +2,19 @@

 This topic introduces how to install BR.

+
+## Notes
+
+To use the BR (Enterprise Edition) tool, you need to install the NebulaGraph Agent service, which runs as a daemon on each machine in the cluster to start and stop the NebulaGraph service and to upload and download backup files. The BR (Enterprise Edition) tool and the Agent plug-in are installed as described below.
+
+
+## Version compatibility
+
+|NebulaGraph|BR |Agent |
+|:---|:---|:---|
+|3.3.0|3.3.0|0.2.0|
+|3.0.x ~ 3.2.x|0.6.1|0.1.0 ~ 0.2.0|
+
 ## Install BR with a binary file

 1. Install BR.

@@ -64,3 +77,66 @@ Users can enter `bin/br version` on the command line. If the following results a
 [nebula-br]$ bin/br version
 NebulaGraph Backup And Restore Utility Tool,V-{{br.release}}
 ```
+
+## Install Agent
+
+NebulaGraph Agent is installed as a binary file on each machine and serves the BR tool via the RPC protocol.
+
+On **each machine**, follow these steps:
+
+1. Install Agent.
+
+    ```
+    wget https://github.com/vesoft-inc/nebula-agent/releases/download/v{{agent.release}}/agent-{{agent.release}}-linux-amd64
+    ```
+
+2. Rename the Agent file to `agent`.
+
+    ```
+    sudo mv agent-{{agent.release}}-linux-amd64 agent
+    ```
+
+3. Add execute permission to Agent.
+
+    ```
+    sudo chmod +x agent
+    ```
+
+4. Start Agent.
+
+    !!! note
+
+        Before starting Agent, make sure that the Meta service has been started and Agent has read and write access to the corresponding NebulaGraph cluster directory and backup directory.
+
+    ```
+    sudo nohup ./agent --agent="<agent_ip>:8888" --meta="<meta_ip>:9559" > nebula_agent.log 2>&1 &
+    ```
+
+    - `--agent`: The IP address and port number of Agent.
+    - `--meta`: The IP address and access port of any Meta service in the cluster.
+    - `--ratelimit`: (Optional) Limits the speed of file uploads and downloads to prevent bandwidth from being filled up and making other services unavailable. Unit: Bytes.
+
+    For example:
+
+    ```
+    sudo nohup ./agent --agent="192.168.8.129:8888" --meta="192.168.8.129:9559" --ratelimit=1048576 > nebula_agent.log 2>&1 &
+    ```
+    !!! caution
+
+        The IP address format for `--agent` should be the same as that of the Meta and Storage services set in the [configuration files](../../5.configurations-and-logs/1.configurations/1.configurations.md). That is, use the real IP addresses or use `127.0.0.1`. Otherwise, Agent does not run.
+
+5. 
Log into NebulaGraph and then run the following command to view the status of Agent.
+
+    ```
+    nebula> SHOW HOSTS AGENT;
+    +-----------------+------+----------+---------+--------------+---------+
+    | Host            | Port | Status   | Role    | Git Info Sha | Version |
+    +-----------------+------+----------+---------+--------------+---------+
+    | "192.168.8.129" | 8888 | "ONLINE" | "AGENT" | "96646b8"    |         |
+    +-----------------+------+----------+---------+--------------+---------+
+    ```
+
+## FAQ
+
+### The error `E_LIST_CLUSTER_NO_AGENT_FAILURE` occurs
+
+If the `E_LIST_CLUSTER_NO_AGENT_FAILURE` error occurs, the Agent service may not be started, or the Agent may not be registered to the Meta service. First, execute `SHOW HOSTS AGENT` to check the status of the Agent service on all nodes in the cluster. If the status shows `OFFLINE`, the registration of the Agent failed; in this case, check whether the value of the `--meta` option in the command that starts the Agent service is correct.
diff --git a/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md b/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md
index f2b4578e5b8..bbbdbe39ce1 100644
--- a/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md
+++ b/docs-2.0/backup-and-restore/nebula-br/3.br-backup-data.md
@@ -1,17 +1,15 @@
 # Use BR to back up data

-After the BR is compiled, you can back up data of the entire graph space. This topic introduces how to use the BR to back up data.
+After the BR is installed, you can back up data of the entire graph space. This topic introduces how to use the BR to back up data.

 ## Prerequisites

 To back up data with the BR, do a check of these:

-- The BR is compiled. For more information, see [Compile BR](2.compile-br.md).
+- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.

 - The NebulaGraph services are running.

-- The [nebula-agent](https://github.com/vesoft-inc/nebula-agent) has been downloaded and the nebula-agent service is running on each host in the cluster.
-
 - If you store the backup files locally, create a directory with the same absolute path on the meta servers, the storage servers, and the BR machine for the backup files and get the absolute path. Make sure the account has write privileges for this directory.

 !!! note

@@ -20,19 +18,19 @@ To back up data with the BR, do a check of these:

 ## Procedure

-Run the following command to perform a full backup for the entire cluster.
+In the BR installation directory (the default path of the compiled BR is `./bin/br`), run the following command to perform a full backup for the entire cluster.

 !!! Note

     Make sure that the local path where the backup file is stored exists.

 ```bash
-$ ./bin/br backup full --meta --storage
+$ ./br backup full --meta <meta_ip>:<port> --storage <storage_url>
 ```

 For example:

-- Run the following command to perform a full backup for the entire cluster whose meta service address is `127.0.0.1:9559`, and save the backup file to `/home/nebula/backup/`.
+- Run the following command to perform a full backup for the entire cluster whose meta service address is `192.168.8.129:9559`, and save the backup file to `/home/nebula/backup/`.

    !!! caution

       If you back up data to a local disk, only the data of the leader metad is backed up by default. So if there are multiple metad processes, you need to manually copy the directory of the leader metad (path `/meta`) and overwrite the corresponding directory of other follower metad processes. 
    ```bash
-    $ ./bin/br backup full --meta "127.0.0.1:9559" --storage "local:///home/nebula/backup/"
+    $ ./br backup full --meta "192.168.8.129:9559" --storage "local:///home/nebula/backup/"
    ```

-- Run the following command to perform a full backup for the entire cluster whose meta service address is `127.0.0.1:9559`, and save the backup file to `backup` in the `br-test` bucket of the object storage service compatible with S3 protocol.
+- Run the following command to perform a full backup for the entire cluster whose meta service address is `192.168.8.129:9559`, and save the backup file to `backup` in the `br-test` bucket of the object storage service compatible with S3 protocol.

    ```bash
-    $ ./bin/br backup full --meta "127.0.0.1:9559" --s3.endpoint "http://127.0.0.1:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region=default
+    $ ./br backup full --meta "192.168.8.129:9559" --s3.endpoint "http://192.168.8.129:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region=default
    ```

@@ -57,10 +55,10 @@ The parameters are as follows.

 | Parameter | Data type | Required | Default value | Description |
 | --- | --- | --- | --- | --- |
 | `-h,-help` | - | No | None | Checks help for restoration. |
-| `-debug` | - | No | None | Checks for more log information. |
-| `-log` | string | No | `"br.log"` | Specifies detailed log path for restoration and backup. |
-| `-meta` | string | Yes | None | The IP address and port of the meta service. |
-| `-name` | string | Yes | None | The name of backup. |
+| `--debug` | - | No | None | Checks for more log information. |
+| `--log` | string | No | `"br.log"` | Specifies detailed log path for restoration and backup. |
+| `--meta` | string | Yes | None | The IP address and port of the meta service. |
+| `--spaces` | string | No | None | (Experimental feature) Specifies the names of the spaces to be backed up. All spaces will be backed up if not specified. Multiple spaces can be specified, and the format is `--spaces nba_01 --spaces nba_02`.|
 | `--storage` | string | Yes | None | The target storage URL of BR backup data. The format is: \<schema\>://\<path\>. <br>Schema: Optional values are `local` and `s3`. <br>When selecting s3, you need to fill in `s3.access_key`, `s3.endpoint`, `s3.region`, and `s3.secret_key`. <br>PATH: The path of the storage location. |
 | `--s3.access_key` | string | No | None | Sets AccessKey ID. |
 | `--s3.endpoint` | string | No | None | Sets the S3 endpoint URL, please specify the HTTP or HTTPS scheme explicitly. |
diff --git a/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md b/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md
index eed0d70fe75..86e40e8bc8f 100644
--- a/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md
+++ b/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md
@@ -14,7 +14,7 @@ If you use the BR to back up data, you can use it to restore the data to NebulaG

 To restore data with the BR, do a check of these:

-- The BR is compiled. For more information, see [Compile BR](2.compile-br.md).
+- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.

 - Download [nebula-agent](https://github.com/vesoft-inc/nebula-agent) and start the agent service in each cluster (including metad, storaged, graphd) host.

@@ -24,14 +24,16 @@ To restore data with the BR, do a check of these:

 ## Procedures

+In the BR installation directory (the default path of the compiled BR is `./br`), perform the following steps to restore data.
+
 1. Users can use the following command to list the existing backup information:

    ```bash
-   $ ./bin/br show --storage
+   $ ./br show --storage <storage_url>
    ```
    For example, run the following command to list the backup information in the local `/home/nebula/backup` path.
    ```bash
-   $ ./bin/br show --storage "local:///home/nebula/backup"
+   $ ./br show --storage "local:///home/nebula/backup"
    +----------------------------+---------------------+------------------------+-------------+------------+
    | NAME | CREATE TIME | SPACES | FULL BACKUP | ALL SPACES |
    +----------------------------+---------------------+------------------------+-------------+------------+
@@ -40,9 +42,9 @@ To restore data with the BR, do a check of these:
    +----------------------------+---------------------+------------------------+-------------+------------+
    ```

-   Or, you can run the following command to list the backup information stored in S3 URL `s3://127.0.0.1:9000/br-test/backup`.
+   Or, you can run the following command to list the backup information stored in S3 URL `s3://192.168.8.129:9000/br-test/backup`.
    ```bash
-   $ ./bin/br show --s3.endpoint "http://127.0.0.1:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region=default
+   $ ./br show --s3.endpoint "http://192.168.8.129:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region=default
    ```

 | Parameter | Data type | Required | Default value | Description |
@@ -60,18 +62,18 @@ To restore data with the BR, do a check of these:
 2. Run the following command to restore data.

 ```
-$ ./bin/br restore full --meta --storage --name
+$ ./br restore full --meta <meta_ip>:<port> --storage <storage_url> --name <backup_name>
 ```

-For example, run the following command to upload the backup files from the local `/home/nebula/backup/` to the cluster where the meta service's address is `127.0.0.1:9559`.
+For example, run the following command to upload the backup files from the local `/home/nebula/backup/` to the cluster where the meta service's address is `192.168.8.129:9559`. 
    ```
-    $ ./bin/br restore full --meta "127.0.0.1:9559" --storage "local:///home/nebula/backup/" --name BACKUP_2021_12_08_18_38_08
+    $ ./br restore full --meta "192.168.8.129:9559" --storage "local:///home/nebula/backup/" --name BACKUP_2021_12_08_18_38_08
    ```

-   Or, you can run the following command to upload the backup files from the S3 URL `s3://127.0.0.1:9000/br-test/backup`.
+   Or, you can run the following command to upload the backup files from the S3 URL `s3://192.168.8.129:9000/br-test/backup`.

    ```bash
-    $ ./bin/br restore full --meta "127.0.0.1:9559" --s3.endpoint "http://127.0.0.1:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region="default" --name BACKUP_2021_12_08_18_38_08
+    $ ./br restore full --meta "192.168.8.129:9559" --s3.endpoint "http://192.168.8.129:9000" --storage="s3://br-test/backup/" --s3.access_key=minioadmin --s3.secret_key=minioadmin --s3.region="default" --name BACKUP_2021_12_08_18_38_08
    ```

    If the following information is returned, the data is restored successfully.
@@ -101,7 +103,7 @@ To restore data with the BR, do a check of these:
 3. Run the following command to clean up temporary files if any error occurred during backup. It will clean the files in the cluster and external storage. You can also use it to clean up old backup files in external storage.

    ```bash
-   $ ./bin/br cleanup --meta --storage --name
+   $ ./br cleanup --meta <meta_ip>:<port> --storage <storage_url> --name <backup_name>
    ```

 The parameters are as follows.
diff --git a/docs-2.0/graph-computing/nebula-algorithm.md b/docs-2.0/graph-computing/nebula-algorithm.md
index 74c27fc1afc..372955722b8 100644
--- a/docs-2.0/graph-computing/nebula-algorithm.md
+++ b/docs-2.0/graph-computing/nebula-algorithm.md
@@ -131,9 +131,6 @@ The `lib` repository provides 10 common graph algorithms.

 ### Submit the algorithm package directly

-!!! note
-    There are limitations to use sealed packages. For example, when sinking a repository into NebulaGraph, the property name of the tag created in the sunk graph space must match the preset name in the code. The first method is recommended if the user has development skills.
-
 1. Set the [Configuration file](https://github.com/vesoft-inc/nebula-algorithm/blob/{{algorithm.branch}}/nebula-algorithm/src/main/resources/application.conf).

    ```bash
@@ -253,6 +250,10 @@ The `lib` repository provides 10 common graph algorithms.
    }
   ```

+   !!! note
+
+       When `sink: nebula` is configured, the algorithm results will be written back to the NebulaGraph cluster. The property names of the tag have implicit conventions. For details, see the **Supported algorithms** section of this topic.
+
 2. Submit the graph computing task.

    ```bash
diff --git a/docs-2.0/graph-computing/nebula-analytics.md b/docs-2.0/graph-computing/nebula-analytics.md
index 3bc670f6e76..0d0e9a64de5 100644
--- a/docs-2.0/graph-computing/nebula-analytics.md
+++ b/docs-2.0/graph-computing/nebula-analytics.md
@@ -5,8 +5,13 @@ NebulaGraph Analytics is a high-performance graph computing framework tool that

 ## Prerequisites

 - The NebulaGraph Analytics installation package has been obtained. [Contact us](https://www.nebula-graph.io/contact) to apply.
+
 - The [license](analytics-ent-license.md) is ready.

+- [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html) 2.2.x or later has been deployed.
+
+- JDK 1.8 has been deployed. 
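+
+You can verify both prerequisites from a shell before installing. This is a minimal check; the exact version strings depend on your environment:
+
+```bash
+# Should report Hadoop 2.2.x or later.
+hadoop version
+# Should report a 1.8.x Java version.
+java -version
+```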
+
 ## Scenarios

 You can import data from data sources such as NebulaGraph clusters, CSV files on HDFS, or local CSV files into NebulaGraph Analytics and export the graph computation results to NebulaGraph clusters, CSV files on HDFS, or local CSV files from NebulaGraph Analytics.

@@ -57,13 +62,28 @@ NebulaGraph Analytics supports the following graph algorithms.

 ## Install NebulaGraph Analytics

-1. When installing a cluster of multiple NebulaGraph Analytics on multiple nodes, you need to install NebulaGraph Analytics to the same path and set up SSH-free login between nodes.
+1. Install NebulaGraph Analytics.

-    ```bash
-    sudo rpm -i nebula-analytics-{{plato.release}}-centos.x86_64.rpm --prefix /home/xxx/nebula-analytics
+    ```
+    sudo rpm -ivh <analytics_package> --prefix=<install_path>
+    sudo chown <user>:<group> -R <install_path>
+    ```
+
+    For example:
+
+    ```
+    sudo rpm -ivh nebula-analytics-{{plato.release}}-centos.x86_64.rpm --prefix=/home/vesoft/nebula-analytics
+    sudo chown vesoft:vesoft -R /home/vesoft/nebula-analytics
+    ```
+
+2. Configure the correct Hadoop path and JDK path in the file `set_env.sh`. The file path is `nebula-analytics/scripts/set_env.sh`. If there are multiple machines, ensure that the paths are the same on all of them.
+
+    ```
+    export HADOOP_HOME=<hadoop_installation_path>
+    export JAVA_HOME=<java_installation_path>
    ```

-2. Copy the license into the directory `scripts` of the NebulaGraph Analytics installation path on all machines.
+3. Copy the license into the directory `scripts` of the NebulaGraph Analytics installation path on all machines.

-For more information about the preceding statements, see[User management](../../7.data-security/1.authentication/2.management-user.md)
+For more information about the preceding statements, see [User management](../../7.data-security/1.authentication/2.management-user.md).
+-->

 ## Browser

diff --git a/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md b/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
index 95634efbb9a..a6b786c03a2 100644
--- a/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
+++ b/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
@@ -8,10 +8,7 @@ NebulaGraph Studio (Studio in short) is a browser-based visualization tool to ma

 ## Released versions

-You can deploy Studio using the following methods:
-
-- You can deploy Studio with Docker, RPM-based, Tar-based or DEB-based and connect it to NebulaGraph. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md).
-- Helm-based. You can deploy Studio with Helm in the Kubernetes cluster and connect it to NebulaGraph. For more information, see [Helm-based Studio](../deploy-connect/st-ug-deploy-by-helm.md).
+In addition to deploying Studio with an RPM, DEB, or tar package, or with Docker, you can also deploy Studio with Helm in a Kubernetes cluster. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md). 
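+
+For example, a Docker-based deployment can be as simple as the following sketch. The image tag and the `7001` port mapping here are assumptions; see [Deploy Studio](../deploy-connect/st-ug-deploy.md) for the exact values:
+
+```bash
+# Pull the Studio image and start it, then open http://<host_ip>:7001 in a browser.
+docker pull vesoft/nebula-graph-studio:v3
+docker run -d --name nebula-studio -p 7001:7001 vesoft/nebula-graph-studio:v3
+```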
diff --git a/docs-2.0/stylesheets/extra.css b/docs-2.0/stylesheets/extra.css
index 7680955d283..02d32b686cc 100644
--- a/docs-2.0/stylesheets/extra.css
+++ b/docs-2.0/stylesheets/extra.css
@@ -1,6 +1,6 @@
 .md-grid {
-  max-width: initial;
-  }
+    max-width: initial;
+}

 /* nebula dark */
 :root{
@@ -16,4 +16,4 @@
     --md-code-fg-color: rgb(12, 21, 26);
     --md-code-bg-color: #eaebec;
     --md-typeset-color: #000000;
-}
\ No newline at end of file
+}
diff --git a/docs-2.0/synchronization-and-migration/2.balance-syntax.md b/docs-2.0/synchronization-and-migration/2.balance-syntax.md
index 8f7b0469c27..7aa821d0641 100644
--- a/docs-2.0/synchronization-and-migration/2.balance-syntax.md
+++ b/docs-2.0/synchronization-and-migration/2.balance-syntax.md
@@ -6,10 +6,10 @@ The `BALANCE` statements are listed as follows.

 |Syntax|Description|
 |:---|:---|
-|`BALANCE DATA`| Starts a job to balance the distribution of storage partitions in the current graph space. It returns the job ID. |
-|`BALANCE DATA REMOVE <ip>:<port> [,<ip>:<port> ...]`| Migrate the partitions in the specified storage host to other storage hosts in the current graph space. |
 |`BALANCE LEADER`| Starts a job to balance the distribution of storage leaders in the current graph space. It returns the job ID. |

-      ent_begin: <!-- # change to "" when releasing core-ent
-      ent_end: --> # change to "" when releasing core-ent
+      ent_begin:  # change to "<!--" when releasing core-ent
+      ent_end:  # change to "-->" when releasing core-ent

 nav:
   - About: README.md