rocksdb-stats #133

Merged
merged 11 commits into from
Aug 24, 2020
31 changes: 14 additions & 17 deletions community/README.md
@@ -1,65 +1,62 @@
## Nebula Graph Community Architecture
# Nebula Graph Community Architecture

The Nebula Graph community is organized as shown below.

![Nebula Community Architecture](./images/structure.png)

### PMC
## PMC

The PMC (Project Management Committee) is the entity that controls and leads the whole Nebula Graph project.
PMC members are responsible for voting in new Maintainers or Committers and have the authority to make all major decisions for Nebula Graph. See [PMC List](./pmc-list.md) for the list of PMC members.


### Maintainer
## Maintainer

Maintainers are the planners and designers of the repository, with the right to merge branches into master. The appointment is for one year. A Maintainer should:

- Set technical directions, roadmap, and priorities for the repository
- Drive the development forward and ensure newcomers, as well as long-time contributors, have a great experience
- Ensure the overall quality of the repository
- Make sure the overall quality of the repository

#### How to become a Maintainer of a Repository
### How to become a Maintainer of a Repository

- Must be a Committer of the repository
- Nominated by the PMC
- Obtain consensus approval from the PMC

See [Maintainer List](./maintainer-list.md) for the Maintainers of each repository.

### Committer
## Committer

Committers come from those Active Contributors who have made significant contributions to the repository. A Committer has approval permission for code reviews of the repository. See [Committer List](./committer-list.md) for the Committers of each repository.

> Note: Each repository requires at least 2 approvals for each PR to be merged into the master branch.
> **NOTE**: Each repository requires at least 2 approvals for each PR to be merged into the master branch.

#### How to become a Committer of a Repository
### How to become a Committer of a Repository

- Generated from Active Contributors
- Has more than 5 PRs merged to the master branch of the repository within a year
- Self-recommended or nominated by a Maintainer of the repository or the PMC
- Gain majority (1/2) votes from the Decision-Making Group (consists of Maintainers of the repository and the PMC)
- The appointment is for one year


### Active Contributor
## Active Contributor

Active Contributors are contributors who remain continuously active in the community. They can have issues and PRs assigned to them and participate in development. See [Active Contributor List](active-contributor-list.md) for the list of Active Contributors.

#### How to become an Active Contributor of a Repository
### How to become an Active Contributor of a Repository

If you contribute at least 5 PRs to a specific repository within one year, you will become an active contributor automatically.

### Contributor
## Contributor

Anyone who contributes one PR for any repository is a Contributor.

#### How to become a Contributor
### How to become a Contributor

To become a Contributor, you should contribute at least 1 PR for any project in the [vesoft-inc organization](https://github.com/vesoft-inc).

There are various ways of contributing. See [Contributing Guide](../CONTRIBUTING.md) to get started.


#### Contributor List

See [Contributors](./contributor-list.md).
4 changes: 2 additions & 2 deletions docs/doc-tools/README.md
@@ -36,7 +36,7 @@ To generate TOC, you should first change directory to the merged file and type t
pandoc -s --toc merged.md -o merged.md
```

**Note**: The default number of section levels is 3 in the table of contents (which means that level-1, 2, and 3 headings will be listed in the contents), use `--toc-depth=NUMBER` to specify that number.
> **NOTE**: The default number of section levels in the table of contents is 3 (which means that level-1, 2, and 3 headings will be listed in the contents). Use `--toc-depth=NUMBER` to specify a different number.
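For example, to list only level-1 and level-2 headings in the generated contents, the same command can be run with the depth reduced (a sketch of the flag described above):

```bash
# Same merge-and-TOC step as above, but limit the TOC to two heading levels
pandoc -s --toc --toc-depth=2 merged.md -o merged.md
```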

## Step Three: Generate PDF

@@ -46,6 +46,6 @@ You can convert the merged markdown file into PDF and print it out for easy-read
pandoc merged.md -o merged.pdf
```

**Note:** Make sure [MiKTeX](https://miktex.org/howto/install-miktex) is installed.
> **NOTE**: Make sure [MiKTeX](https://miktex.org/howto/install-miktex) is installed.

Now you've got your PDF documentation. Have fun with **Nebula Graph**!
2 changes: 1 addition & 1 deletion docs/manual-EN/1.overview/0.introduction.md
@@ -6,7 +6,7 @@

**Nebula Graph's** goal is to provide reading, writing, and computing with high concurrency and low latency for super large-scale graphs. Nebula Graph is an open source project and we look forward to working with the community to popularize and promote graph databases.

## Main Features of Nebula Graph
## Primary Features of Nebula Graph

This section describes some of the important characteristics of **Nebula Graph**.

7 changes: 4 additions & 3 deletions docs/manual-EN/1.overview/1.concepts/2.nGQL-overview.md
@@ -64,11 +64,12 @@

- Simple type: **vid**, **double**, **int**, **bool**, **string**, **timestamp**
<!-- **float**,**path**, **year**, **month** (year/month), **date**, **datetime** -->
- **vid** : 64-bit signed integer, representing a vertex ID
- List of simple types, such as **integer[]**, **double[]**, **string[]**
- **vid**: 64-bit signed integer, representing a vertex ID

<!-- - List of simple types, such as **integer[]**, **double[]**, **string[]**
- **Map**: A list of KV pairs. The key must be a **string**, the value must be the same type for the given map
- **Object** (future release??): A list of KV pairs. The key must be a **string**, the value can be any simple type
- **Tuple List**: *This is only used for return values*. It is composed of both metadata and data (multiple rows). The metadata includes the column names and their types.
- **Tuple List**: *This is only used for return values*. It is composed of both metadata and data (multiple rows). The metadata includes the column names and their types. -->
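As a hedged illustration of how the simple types above appear in a schema (the `player` tag and `serve` edge are borrowed from the quick-start guide):

```ngql
nebula> CREATE TAG player(name string, age int);
nebula> CREATE EDGE serve(start_year int, end_year int);
```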

### Type Conversion

10 changes: 4 additions & 6 deletions docs/manual-EN/1.overview/2.quick-start/1.get-started.md
@@ -260,13 +260,11 @@ nebula> INSERT VERTEX team(name) VALUES 201:("Nuggets");
nebula> INSERT VERTEX player(name, age) VALUES 121:("Useless", 60);
```

**Note**:
- In the vertices inserted above, the number after the keyword `VALUES` is the vertex ID (abbreviated as `VID`, int64). The `VID` must be unique in the space.

1. In the above vertices inserted, the number after the keyword `VALUES` is the vertex ID (abbreviated for `VID`, int64). The `VID` must be unique in the space.
- The last vertex inserted (VID: 121) will be deleted in the [deleting data](#deleting-data) section.

2. The last vertex (VID: 121)inserted will be deleted in the [deleting data](#deleting-data) section.

3. If you want to insert multiple vertices for the same tag by a single `INSERT VERTEX` operation, you can enter the following statement:
- If you want to insert multiple vertices for the same tag by a single `INSERT VERTEX` operation, you can enter the following statement:

```ngql
nebula> INSERT VERTEX player(name, age) VALUES 100:("Tim Duncan", 42), \
@@ -447,7 +445,7 @@ To delete a **follow** edge between `VID` `100` and `VID` `200`, enter the follo
nebula> DELETE EDGE follow 100 -> 200;
```

**Note**: If you delete a vertex, all the out-going and in-coming edges of this vertex are deleted.
> **NOTE**: If you delete a vertex, all the out-going and in-coming edges of this vertex are deleted.
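For example, deleting the vertex with VID `121` (inserted earlier and scheduled for deletion in this section) also removes its out-going and in-coming edges:

```ngql
nebula> DELETE VERTEX 121;
```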

### Sample Queries

22 changes: 9 additions & 13 deletions docs/manual-EN/1.overview/2.quick-start/4.import-csv-file.md
@@ -34,7 +34,7 @@ $ sudo docker pull vesoft/nebula-console:nightly
$ sudo docker run --rm -ti --network=host vesoft/nebula-console:nightly --addr=127.0.0.1 --port=3699
```

**Note**: You must ensure your IP address and port number are configured correctly.
> **NOTE**: Make sure your IP address and port number are configured correctly.

## Creating the Schema for Vertices and Edges

@@ -136,11 +136,9 @@ files:

```

**Note**:
- In the above configuration file, you must change the IP address and the port number to yours.

* In the above configuration file, you must change the IP address and the port number to yours.

* You must change the directory of the CSV files to yours, otherwise, [**Nebula Importer**](https://github.com/vesoft-inc/nebula-importer) cannot find the CSV files.
- You must change the directory of the CSV files to yours, otherwise, [**Nebula Importer**](https://github.com/vesoft-inc/nebula-importer) cannot find the CSV files.

## Preparing the CSV Data

@@ -199,19 +197,17 @@ The data in the `team.csv` file is as follows:
208,Kings
```

**Note**:

* In the **serve** and **follow** CSV files, the first column is the source vertex ID, the second column is the destination vertex ID, and the other columns are consistent with the `config.yaml` file.
- In the **serve** and **follow** CSV files, the first column is the source vertex ID, the second column is the destination vertex ID, and the other columns are consistent with the `config.yaml` file.

* In the **player** and **team** CSV files, the first column is the vertex ID and the other columns are consistent with the `config.yaml` file.
- In the **player** and **team** CSV files, the first column is the vertex ID and the other columns are consistent with the `config.yaml` file.

## Importing the CSV Data

After all the previous four steps are complete, you can import the CSV data with `Docker` or `Go`.

### Importing the CSV Data With Go-importer

Before you import CSV data with `Go-importer`, you must ensure `Go` is installed and the environment variable for `Go` is configured.
Before you import CSV data with `Go-importer`, make sure `Go` is installed and the environment variable for `Go` is configured.

You can import the CSV data by the following steps:

@@ -227,11 +223,11 @@ $ cd /home/nebula/nebula-importer/cmd
$ go run importer.go --config /home/nebula/config.yaml
```

**Note**: You must change the directory for the `import.go` file and the directory for the `config.yaml` file to yours, otherwise, the importing operation might fail.
> **NOTE**: You must change the directory for the `import.go` file and the directory for the `config.yaml` file to yours, otherwise, the importing operation might fail.

### Importing the CSV Data With Docker

Before you import the CSV data with `Docker`, you must ensure that `Docker` is up and running.
Before you import the CSV data with `Docker`, make sure that `Docker` is running.

You can import the CSV data with `Docker` by the following command:

@@ -242,4 +238,4 @@ $ sudo docker run --rm -ti --network=host \
--config /home/nebula/config.yaml
```

**Note**: You must change the directory for the `config.yaml` file to yours, otherwise the importing operation might fail.
> **NOTE**: You must change the directory for the `config.yaml` file to yours, otherwise the importing operation might fail.
@@ -10,7 +10,7 @@ This document walks you through how **Nebula Graph** is designed and why

The axis in the above picture shows the different requirements for query latency. Like traditional databases, graph databases can be divided into two parts: OLAP and OLTP. OLAP cares more about offline analysis while OLTP prefers online processing. A graph computing framework on the OLAP side analyzes data based on the graph structure. It is similar to OLAP in traditional databases, but it has features that traditional databases lack, one being iterative algorithms over the graph. A typical example is Google's PageRank algorithm, which obtains the relevance of web pages through constant iterative computing. Another example is the commonly used LPA algorithm.

Along the axis to the right comes the graph streaming field, which combines graph computing and stream computing. A relational network is not a static structure; rather, it constantly changes with the business, be it the graph structure or the graph properties. Computing in this field
is often triggered by events, and its latency is in seconds.

Right beside graph streaming is the online response system, whose latency requirement is extremely strict: it should be in milliseconds.
@@ -31,7 +31,7 @@ Here **V** is a set of nodes, aka vertices, **E** is a set of directional edges,

## Nebula Graph Architecture

Designed based on the above features, **Nebula Graph** is an open source, distributed, lightning-fast graph database. It is composed of four components: the storage service, the meta service, the query engine, and the client.

![meetup1-13](https://user-images.githubusercontent.com/42762957/64231577-9c527c80-cf22-11e9-9044-9a739a22c42a.jpg)

@@ -69,7 +69,7 @@ Migrating the partitions on an overworked server to other relatively idle server

### Design-Thinking: Meta Service

The binary of the meta service is **nebula-metad**. Here is the list of its main functionalities:
The binary of the meta service is **nebula-metad**. Here is the list of its primary functionalities:

- User management

@@ -93,15 +93,15 @@ The meta service is stateful, and just like the storage service, it persists dat

**Nebula Graph**'s query language **nGQL** is a SQL-like descriptive language rather than an imperative one. It is composable but not embeddable; as an alternative it uses Shell-style pipes, i.e., the output of the former query acts as the input of the latter one. Key features of nGQL are as follows:

- Main algorithms are built in the query engine
- Primary algorithms are built in the query engine
- Duplicate queries can be avoided by supporting user-defined function (UDF)
- Programmable

The binary of the query engine is **nebula-graphd**. Each nebula-graphd instance is stateless and never talks to other nebula-graphd. nebula-graphd only talks to the storage service and the meta service. That makes it trivial to expand or shrink the query engine cluster.

The query engine accepts the message from the client and generates the execution plan after the lexical parsing (Lexer), semantic analysis (Parser) and the query optimization. Then the execution plan will be passed to the execution engine. The query execution engine takes the query plans and interacts with meta server and the storage engine to retrieve the schema and data.

The main optimizations of the query engine are:
The primary optimizations of the query engine are:

- Asynchronous and parallel execution

@@ -16,7 +16,7 @@ ttl_definition:

`ALTER EDGE` statement changes the structure of an edge. For example, you can add or delete properties, change the data type of an existing property. You can also set a property as TTL (Time-To-Live), or change the TTL duration.

**Note:** **Nebula Graph** automatically examines indexes when altering an edge. When altering an edge, **Nebula Graph** first checks whether the edge is associated with any indexes then traverses all of them to check whether the column item to be dropped or changed exists in the index column. If existed, the alter is rejected. Otherwise, it is allowed.
> **NOTE**: **Nebula Graph** automatically examines indexes when altering an edge. It first checks whether the edge is associated with any indexes, then traverses all of them to check whether the column to be dropped or changed exists in an index column. If it does, the alteration is rejected; otherwise, it is allowed.

Please refer to [Index Documentation](index.md) for details about indexes.

@@ -31,4 +31,4 @@ nebula> ALTER EDGE e1 ADD (prop1 int, prop2 string), /* add prop1 */
nebula> ALTER EDGE e1 TTL_DURATION = 2, TTL_COL = "prop1";
```

**Note:** `TTL_COL` only supports the properties whose values are of the `INT` or the `TIMESTAMP` type.
> **NOTE**: `TTL_COL` only supports the properties whose values are of the `INT` or the `TIMESTAMP` type.
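A minimal sketch of the constraint above, assuming a hypothetical `created` property of type `timestamp` on the edge type `e1`:

```ngql
nebula> ALTER EDGE e1 ADD (created timestamp);
nebula> ALTER EDGE e1 TTL_DURATION = 100, TTL_COL = "created"; /* allowed: timestamp property */
```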
@@ -16,7 +16,7 @@ ttl_definition:

`ALTER TAG` statement changes the structure of a tag. For example, you can add or delete properties, change the data type of an existing property. You can also set a property as TTL (Time-To-Live), or change the TTL duration.

**Note:** **Nebula Graph** automatically examines indexes when altering a tag. When altering a tag, **Nebula Graph** first checks whether the tag is associated with any indexes then traverses all of them to check whether the column item to be dropped or changed exists in the index column. If existed, the alter is rejected. Otherwise, it is allowed.
> **NOTE**: **Nebula Graph** automatically examines indexes when altering a tag. It first checks whether the tag is associated with any indexes, then traverses all of them to check whether the column to be dropped or changed exists in an index column. If it does, the alteration is rejected; otherwise, it is allowed.

Please refer to [Index Documentation](index.md) for details about indexes.

@@ -28,4 +28,4 @@ nebula> ALTER TAG t1 ADD (id int, address string);
nebula> ALTER TAG t1 TTL_DURATION = 2, TTL_COL = "age";
```

**Note:** `TTL_COL` only supports the properties whose values are of the `INT` or the `TIMESTAMP` type.
> **NOTE**: `TTL_COL` only supports the properties whose values are of the `INT` or the `TIMESTAMP` type.
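A minimal sketch of the constraint above, assuming a hypothetical `created` property of type `timestamp` on the tag `t1`:

```ngql
nebula> ALTER TAG t1 ADD (created timestamp);
nebula> ALTER TAG t1 TTL_DURATION = 100, TTL_COL = "created"; /* allowed: timestamp property */
```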
@@ -25,7 +25,7 @@ The features of this syntax are described in the following sections:

You can use the `If NOT EXISTS` keywords when creating edge types. This keyword automatically detects if the corresponding edge type exists. If it does not exist, a new one is created. Otherwise, no edge type is created.

**Note:** The edge type existence detection here only compares the edge edge name (excluding properties).
> **NOTE**: The edge type existence detection here only compares the edge type name (excluding properties).
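A sketch of this behavior, using the hypothetical edge type `e1`:

```ngql
nebula> CREATE EDGE IF NOT EXISTS e1(prop1 int);
nebula> CREATE EDGE IF NOT EXISTS e1(prop1 int, prop2 string); /* no-op: an edge type named e1 already exists */
```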

## Edge Type Name

@@ -11,7 +11,7 @@ This statement creates a new space with the given name. SPACE is a region that p

You can use the `If NOT EXISTS` keywords when creating spaces. This keyword automatically detects if the corresponding space exists. If it does not exist, a new one is created. Otherwise, no space is created.

**Note:** The space existence detection here only compares the space name (excluding properties).
> **NOTE**: The space existence detection here only compares the space name (excluding properties).
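A sketch of this behavior, using a hypothetical space name:

```ngql
nebula> CREATE SPACE IF NOT EXISTS my_space;
nebula> CREATE SPACE IF NOT EXISTS my_space; /* no-op: a space named my_space already exists */
```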

## Space Name

@@ -25,7 +25,7 @@ The features of this syntax are described in the following sections:

You can use the `If NOT EXISTS` keywords when creating tags. This keyword automatically detects if the corresponding tag exists. If it does not exist, a new one is created. Otherwise, no tag is created.

**Note:** The tag existence detection here only compares the tag name (excluding properties).
> **NOTE**: The tag existence detection here only compares the tag name (excluding properties).
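A sketch of this behavior (the `player` tag and its properties are borrowed from the quick-start guide):

```ngql
nebula> CREATE TAG IF NOT EXISTS player(name string, age int);
nebula> CREATE TAG IF NOT EXISTS player(age int); /* no-op: only the tag name is compared */
```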

## Tag Name

@@ -6,7 +6,7 @@ DROP EDGE [IF EXISTS] <edge_type_name>

You must have the DROP privilege for the edge type.

**Note:** When dropping an edge, **Nebula Graph** only checks whether the edge is associated with any indexes. If so the deletion is rejected.
> **NOTE**: When dropping an edge, **Nebula Graph** only checks whether the edge is associated with any indexes. If so, the deletion is rejected.
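A minimal sketch of the syntax above, assuming an edge type `e1` with no associated indexes:

```ngql
nebula> DROP EDGE IF EXISTS e1;
```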

Please refer to [Index Documentation](index.md) for details about indexes.

@@ -6,9 +6,7 @@ DROP TAG [IF EXISTS] <tag_name>

You must have the DROP privilege for the tag.

> Be careful with this statement.

**Note:** When dropping a tag, **Nebula Graph** will only check whether the tag is associated with any indexes. If so the deletion is rejected.
> **NOTE**: Be careful with this statement. When dropping a tag, **Nebula Graph** will only check whether the tag is associated with any indexes. If so, the deletion is rejected.
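A minimal sketch of the syntax above, assuming a tag `t1` with no associated indexes:

```ngql
nebula> DROP TAG IF EXISTS t1;
```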

Please refer to [Index Documentation](index.md) for details about indexes.
