get-started fix minors & add upsert #12

Merged
merged 2 commits on May 9, 2020
53 changes: 35 additions & 18 deletions docs/manual-CN/1.overview/2.quick-start/1.get-started.md
@@ -4,7 +4,7 @@

- Installation
- Data modeling
- CRUD operations
- Insert, delete, update, and query
- Batch insert
- Data import tools

@@ -34,13 +34,14 @@
0. Check the status of the machines in the cluster:

```ngql
nebula> SHOW HOSTS
nebula> SHOW HOSTS;
================================================================================================
| Ip | Port | Status | Leader count | Leader distribution | Partition distribution |
================================================================================================
| 192.168.8.210 | 44500 | online | | | |
------------------------------------------------------------------------------------------------
| 192.168.8.211 | 44500 | online | | | |
------------------------------------------------------------------------------------------------
```

The `online` status means that the **storage service process** `storaged` has successfully connected to the **metadata service process** `metad`.
@@ -59,28 +60,30 @@
You can check the machines and the partition distribution with the `SHOW HOSTS` command:

```ngql
nebula> SHOW HOSTS
nebula> SHOW HOSTS;
================================================================================================
| Ip | Port | Status | Leader count | Leader distribution | Partition distribution |
================================================================================================
| 192.168.8.210 | 44500 | online | 8 | nba: 8 | test: 8 |
------------------------------------------------------------------------------------------------
| 192.168.8.211 | 44500 | online | 2 | nba: 2 | test: 2 |
------------------------------------------------------------------------------------------------
```

If all machines are online but the leader distribution is unbalanced (as above), you can run the `BALANCE LEADER` command to trigger a redistribution of the partition leaders:

```ngql
nebula> BALANCE LEADER
nebula> BALANCE LEADER;
================================================================================================
| Ip | Port | Status | Leader count | Leader distribution | Partition distribution |
================================================================================================
| 192.168.8.210 | 44500 | online | 5 | nba: 5 | test: 5 |
------------------------------------------------------------------------------------------------
| 192.168.8.211 | 44500 | online | 5 | nba: 5 | test: 5 |
------------------------------------------------------------------------------------------------
```

For a detailed explanation, see [here](../../2.query-language/4.statement-syntax/1.data-definition-statements/create-space-syntax.md)
For a detailed explanation, see [here](../../2.query-language/4.statement-syntax/1.data-definition-statements/create-space-syntax.md).

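The linked page covers `CREATE SPACE`, which is where `partition_num` and `replica_factor` are set. A minimal sketch, assuming an illustrative space name and values not taken from this guide:

```ngql
-- Hypothetical example: create a space with 10 partitions and one replica per partition.
nebula> CREATE SPACE test(partition_num=10, replica_factor=1);
```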
2. Enter the following statement to specify the graph space to use:

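The statement itself sits in the collapsed hunk below; a minimal sketch, assuming the space used in this guide is named `nba`:

```ngql
-- Hypothetical example: switch the current session to the nba graph space.
nebula> USE nba;
```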
@@ -136,7 +139,7 @@

5. Now you can view the tags and edge types you just created.

5.1. To retrieve the tags you just created, enter the following statement:
5.1 To retrieve the tags you just created, enter the following statement:

```ngql
nebula> SHOW TAGS;
@@ -154,7 +157,7 @@
------------
```

5.2. To show the edge types you just created, enter the following statement:
5.2 To show the edge types you just created, enter the following statement:

```ngql
nebula> SHOW EDGES;
@@ -172,7 +175,7 @@
----------
```

5.3. To show the properties of the **player** tag, enter the following statement:
5.3 To show the properties of the **player** tag, enter the following statement:

```ngql
nebula> DESCRIBE TAG player;
@@ -190,7 +193,7 @@
-------------------
```

5.4. To get the properties of the **follow** edge type, enter the following statement:
5.4 To get the properties of the **follow** edge type, enter the following statement:

```ngql
nebula> DESCRIBE EDGE follow;
@@ -259,7 +262,7 @@ nebula> INSERT EDGE serve(start_year, end_year) VALUES 101 -> 201:(1999, 2018);
**Likewise**: if you want to batch insert multiple edges of the same type at once, you can run the following statement:

```ngql
INSERT EDGE follow(degree) VALUES 100 -> 101:(95), 100 -> 102:(90), 102 -> 101:(75);
nebula> INSERT EDGE follow(degree) VALUES 100 -> 101:(95), 100 -> 102:(90), 102 -> 101:(75);
```
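The matching batch insert for vertices sits in a collapsed part of the diff; as a hedged sketch using the player IDs referenced elsewhere in this guide (the names and ages here are illustrative):

```ngql
-- Hypothetical example: insert several player vertices in a single statement.
nebula> INSERT VERTEX player(name, age) VALUES 100:("Tim Duncan", 42), 101:("Tony Parker", 36), 102:("LaMarcus Aldridge", 33);
```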

### Read Data
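Most of this section is collapsed in the diff; only the tail of a `FETCH PROP` result is visible below. A minimal reading sketch, assuming the vertices and edges inserted above:

```ngql
-- Hypothetical examples: traverse outgoing follow edges from vertex 100,
-- then fetch the properties of a single player vertex.
nebula> GO FROM 100 OVER follow;
nebula> FETCH PROP ON player 100;
```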
@@ -364,11 +367,23 @@ nebula> FETCH PROP ON follow 100 -> 101;
------------------------------------------------------------
```

<!--
### UPSERT
#### UPSERT

`UPSERT` is used to insert a new vertex or edge, or to update an existing one. If the vertex or edge does not exist, it is created. `UPSERT` is a combination of `INSERT` and `UPDATE`.

For example:

TODO
-->
```ngql
nebula> INSERT VERTEX player(name, age) VALUES 111:("Ben Simmons", 22); -- Insert a new vertex.
nebula> UPSERT VERTEX 111 SET player.name = "Dwight Howard", player.age = $^.player.age + 11 WHEN $^.player.name == "Ben Simmons" && $^.player.age > 20 YIELD $^.player.name AS Name, $^.player.age AS Age; -- UPSERT on that vertex.
=======================
| Name | Age |
=======================
| Dwight Howard | 33 |
-----------------------
```

For details, see the [UPSERT documentation](../../2.query-language/4.statement-syntax/2.data-query-and-manipulation-statements/upsert-syntax.md).

### Delete Data

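The body of this section is collapsed in the diff; a minimal sketch of deleting data, with hypothetical vertex and edge IDs:

```ngql
-- Hypothetical examples: delete one edge, then one vertex.
nebula> DELETE EDGE follow 100 -> 200;
nebula> DELETE VERTEX 121;
```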
@@ -482,9 +497,11 @@ nebula> GO FROM 100 OVER follow WHERE $$.player.age >= 35 \

**Here**:

- `$^` refers to the source vertex of the edge.
- `|` denotes a pipe.
- `$-` refers to the input stream. The output `(id)` of the previous query becomes the input `($-.id)` of the next query.
`$^` refers to the source vertex of the edge.

`|` denotes a pipe.

`$-` refers to the input stream. The output `(id)` of the previous query becomes the input `($-.id)` of the next query.
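A hedged illustration of these symbols, assuming the `player`/`team` tags and the `follow`/`serve` edges used earlier in this guide:

```ngql
-- Hypothetical example: the id column output by the first GO feeds the second GO as $-.id.
nebula> GO FROM 100 OVER follow YIELD follow._dst AS id \
        | GO FROM $-.id OVER serve YIELD $^.player.name AS Player, $$.team.name AS Team;
```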

2. Use a `user-defined variable` to combine the two queries

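The full example is in the collapsed hunk below; a minimal sketch of the same idea with a user-defined variable (schema names assumed as above):

```ngql
-- Hypothetical example: store the first query's result in $var, then reuse its id column.
nebula> $var = GO FROM 100 OVER follow YIELD follow._dst AS id; \
        GO FROM $var.id OVER serve YIELD $$.team.name AS Team;
```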
@@ -539,7 +556,7 @@ vesoft/nebula-console:nightly --addr=<127.0.0.1> --port=<3699>

### Batch Import

If you need to insert millions of records, we recommend the [CSV import tool](../../3.build-develop-and-administration/5.storage-service-administration/data-import/import-csv-file.md) and the [Spark import tool](../../3.build-develop-and-administration/5.storage-service-administration/data-import/spark-writer.md)
If you need to insert millions of records, we recommend the [CSV import tool](../../3.build-develop-and-administration/5.storage-service-administration/data-import/import-csv-file.md) and the [Spark import tool](../../3.build-develop-and-administration/5.storage-service-administration/data-import/spark-writer.md).

### Finally
