From 2aaadda5d20e078fd6ee3de945b2c0ba6e44f15a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:00 -0500 Subject: [PATCH 001/432] New translations graphql-api.mdx (Spanish) --- pages/es/developer/graphql-api.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/es/developer/graphql-api.mdx b/pages/es/developer/graphql-api.mdx index 65928d8734e0..f9cb6214fcd9 100644 --- a/pages/es/developer/graphql-api.mdx +++ b/pages/es/developer/graphql-api.mdx @@ -204,12 +204,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples From 76fdc374c87d84650caf0a2049caaf2ad4d6a1c4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:01 -0500 Subject: [PATCH 002/432] New translations graphql-api.mdx (Arabic) --- pages/ar/developer/graphql-api.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ar/developer/graphql-api.mdx b/pages/ar/developer/graphql-api.mdx index 65928d8734e0..f9cb6214fcd9 100644 --- a/pages/ar/developer/graphql-api.mdx +++ b/pages/ar/developer/graphql-api.mdx @@ -204,12 +204,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. 
| +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples From 983d02a798626e5e99251c0062ecb354340e39a2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:02 -0500 Subject: [PATCH 003/432] New translations graphql-api.mdx (Japanese) --- pages/ja/developer/graphql-api.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ja/developer/graphql-api.mdx b/pages/ja/developer/graphql-api.mdx index 65928d8734e0..f9cb6214fcd9 100644 --- a/pages/ja/developer/graphql-api.mdx +++ b/pages/ja/developer/graphql-api.mdx @@ -204,12 +204,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples From 67a43ad2bcef6f6a83f89a5b5ab8940c4361627d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:02 -0500 Subject: [PATCH 004/432] New translations graphql-api.mdx (Korean) --- pages/ko/developer/graphql-api.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ko/developer/graphql-api.mdx b/pages/ko/developer/graphql-api.mdx index 65928d8734e0..f9cb6214fcd9 100644 --- a/pages/ko/developer/graphql-api.mdx +++ b/pages/ko/developer/graphql-api.mdx @@ -204,12 +204,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) 
| +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples From 5197fa225e7211f796f07891d9361127ad9d259a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:05 -0500 Subject: [PATCH 005/432] New translations assemblyscript-api.mdx (Japanese) --- pages/ja/developer/assemblyscript-api.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/ja/developer/assemblyscript-api.mdx b/pages/ja/developer/assemblyscript-api.mdx index 1b8260e33971..a609e6cd657f 100644 --- a/pages/ja/developer/assemblyscript-api.mdx +++ b/pages/ja/developer/assemblyscript-api.mdx @@ -43,13 +43,13 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| Version | Release notes | -| :-: | --- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br>`etherem.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes                                                                                                                                                                                                                            |
+|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.6   | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object                                                                                                                           |
+| 0.0.5   | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4   | Added `functionSignature` field to the Ethereum SmartContractCall object                                                                                                                                                                 |
+| 0.0.3   | Added `from` field to the Ethereum Call object<br>
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types From fda78c770eb43ea8a50d74208cfaa2a7a34fc0bb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:09 -0500 Subject: [PATCH 006/432] New translations assemblyscript-api.mdx (Spanish) --- pages/es/developer/assemblyscript-api.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/es/developer/assemblyscript-api.mdx b/pages/es/developer/assemblyscript-api.mdx index 1b8260e33971..a609e6cd657f 100644 --- a/pages/es/developer/assemblyscript-api.mdx +++ b/pages/es/developer/assemblyscript-api.mdx @@ -43,13 +43,13 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| Version | Release notes | -| :-: | --- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br>`etherem.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes                                                                                                                                                                                                                            |
+|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.6   | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object                                                                                                                           |
+| 0.0.5   | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4   | Added `functionSignature` field to the Ethereum SmartContractCall object                                                                                                                                                                 |
+| 0.0.3   | Added `from` field to the Ethereum Call object<br>
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types From fe4d39dfbe15196c5b9397c5491130533f20e446 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:10 -0500 Subject: [PATCH 007/432] New translations assemblyscript-api.mdx (Arabic) --- pages/ar/developer/assemblyscript-api.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/ar/developer/assemblyscript-api.mdx b/pages/ar/developer/assemblyscript-api.mdx index 1b8260e33971..a609e6cd657f 100644 --- a/pages/ar/developer/assemblyscript-api.mdx +++ b/pages/ar/developer/assemblyscript-api.mdx @@ -43,13 +43,13 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| Version | Release notes | -| :-: | --- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br>`etherem.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes                                                                                                                                                                                                                            |
+|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.6   | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object                                                                                                                           |
+| 0.0.5   | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4   | Added `functionSignature` field to the Ethereum SmartContractCall object                                                                                                                                                                 |
+| 0.0.3   | Added `from` field to the Ethereum Call object<br>
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types From 1a029e0386685984e075c5f573848132a3fb09f1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:11 -0500 Subject: [PATCH 008/432] New translations assemblyscript-api.mdx (Korean) --- pages/ko/developer/assemblyscript-api.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/ko/developer/assemblyscript-api.mdx b/pages/ko/developer/assemblyscript-api.mdx index 1b8260e33971..a609e6cd657f 100644 --- a/pages/ko/developer/assemblyscript-api.mdx +++ b/pages/ko/developer/assemblyscript-api.mdx @@ -43,13 +43,13 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| Version | Release notes | -| :-: | --- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br>`etherem.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes                                                                                                                                                                                                                            |
+|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.6   | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object                                                                                                                           |
+| 0.0.5   | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4   | Added `functionSignature` field to the Ethereum SmartContractCall object                                                                                                                                                                 |
+| 0.0.3   | Added `from` field to the Ethereum Call object<br>
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types From 258f895cd70e01f227652e0eeb8332f44b90103b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:11 -0500 Subject: [PATCH 009/432] New translations create-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/create-subgraph-hosted.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/zh/developer/create-subgraph-hosted.mdx b/pages/zh/developer/create-subgraph-hosted.mdx index 3b05b2548456..6b235e379634 100644 --- a/pages/zh/developer/create-subgraph-hosted.mdx +++ b/pages/zh/developer/create-subgraph-hosted.mdx @@ -218,15 +218,15 @@ Each entity must have an `id` field, which is of type `ID!` (string). The `id` f We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -684,7 +684,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. 
Load the transaction details page where you'll find the start block for that contract. From a9dc08f4287ef6fdf2496ca8e243bc68038787cc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:12 -0500 Subject: [PATCH 010/432] New translations assemblyscript-api.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-api.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/zh/developer/assemblyscript-api.mdx b/pages/zh/developer/assemblyscript-api.mdx index 1b8260e33971..a609e6cd657f 100644 --- a/pages/zh/developer/assemblyscript-api.mdx +++ b/pages/zh/developer/assemblyscript-api.mdx @@ -43,13 +43,13 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| Version | Release notes | -| :-: | --- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br>`etherem.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes                                                                                                                                                                                                                            |
+|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.6   | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object                                                                                                                           |
+| 0.0.5   | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4   | Added `functionSignature` field to the Ethereum SmartContractCall object                                                                                                                                                                 |
+| 0.0.3   | Added `from` field to the Ethereum Call object<br>
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types From 7ff9462dbf62a495e009ceeedca7a2d9a14e0fdd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:13 -0500 Subject: [PATCH 011/432] New translations assemblyscript-migration-guide.mdx (Spanish) --- pages/es/developer/assemblyscript-migration-guide.mdx | 5 ----- 1 file changed, 5 deletions(-) diff --git a/pages/es/developer/assemblyscript-migration-guide.mdx b/pages/es/developer/assemblyscript-migration-guide.mdx index c63a1af95d7b..2db90a608110 100644 --- a/pages/es/developer/assemblyscript-migration-guide.mdx +++ b/pages/es/developer/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -285,7 +281,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript From 26305bd5a5f18bfeb012f497630af64555e3d0f4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:14 -0500 Subject: [PATCH 012/432] New translations assemblyscript-migration-guide.mdx (Arabic) --- pages/ar/developer/assemblyscript-migration-guide.mdx | 5 ----- 1 file changed, 5 deletions(-) diff --git a/pages/ar/developer/assemblyscript-migration-guide.mdx b/pages/ar/developer/assemblyscript-migration-guide.mdx index c63a1af95d7b..2db90a608110 100644 --- a/pages/ar/developer/assemblyscript-migration-guide.mdx +++ b/pages/ar/developer/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -285,7 +281,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript From deddfe43d2e8f78a93a17c04ee9df827a2f98998 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:14 -0500 Subject: [PATCH 013/432] New translations assemblyscript-migration-guide.mdx (Japanese) --- pages/ja/developer/assemblyscript-migration-guide.mdx | 5 ----- 1 file changed, 5 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index c63a1af95d7b..2db90a608110 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -285,7 +281,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript From a20e14efbaf5c527c619bfe35a56d63d92e45ae7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:15 -0500 Subject: [PATCH 014/432] New translations assemblyscript-migration-guide.mdx (Korean) --- pages/ko/developer/assemblyscript-migration-guide.mdx | 5 ----- 1 file changed, 5 deletions(-) diff --git a/pages/ko/developer/assemblyscript-migration-guide.mdx b/pages/ko/developer/assemblyscript-migration-guide.mdx index c63a1af95d7b..2db90a608110 100644 --- a/pages/ko/developer/assemblyscript-migration-guide.mdx +++ b/pages/ko/developer/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -285,7 +281,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript From 245f630286f95a4c6b475ef3a9587e72dd6d3529 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:16 -0500 Subject: [PATCH 015/432] New translations assemblyscript-migration-guide.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-migration-guide.mdx | 5 ----- 1 file changed, 5 deletions(-) diff --git a/pages/zh/developer/assemblyscript-migration-guide.mdx b/pages/zh/developer/assemblyscript-migration-guide.mdx index c63a1af95d7b..2db90a608110 100644 --- a/pages/zh/developer/assemblyscript-migration-guide.mdx +++ b/pages/zh/developer/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -285,7 +281,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript From 0e1dc1e7721ae9b6ad2948863ffa48e1236b1e43 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:17 -0500 Subject: [PATCH 016/432] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index 3b05b2548456..6b235e379634 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -218,15 +218,15 @@ Each entity must have an `id` field, which is of type `ID!` (string). The `id` f We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. 
| +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -684,7 +684,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. From 254b773d2fae60edb94495acd7d78674f1db11cd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:18 -0500 Subject: [PATCH 017/432] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index 3b05b2548456..6b235e379634 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -218,15 +218,15 @@ Each entity must have an `id` field, which is of type `ID!` (string). The `id` f We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. 
| +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -684,7 +684,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. From e0a2d9193967c74703bf5ca489ce16517e5b9571 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:19 -0500 Subject: [PATCH 018/432] New translations create-subgraph-hosted.mdx (Japanese) --- pages/ja/developer/create-subgraph-hosted.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ja/developer/create-subgraph-hosted.mdx b/pages/ja/developer/create-subgraph-hosted.mdx index 3b05b2548456..6b235e379634 100644 --- a/pages/ja/developer/create-subgraph-hosted.mdx +++ b/pages/ja/developer/create-subgraph-hosted.mdx @@ -218,15 +218,15 @@ Each entity must have an `id` field, which is of type `ID!` (string). The `id` f We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. 
| +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -684,7 +684,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. From a1c9e4b324e6ef22593389b8860535dd6bfe1a3b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:19 -0500 Subject: [PATCH 019/432] New translations create-subgraph-hosted.mdx (Korean) --- pages/ko/developer/create-subgraph-hosted.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ko/developer/create-subgraph-hosted.mdx b/pages/ko/developer/create-subgraph-hosted.mdx index 3b05b2548456..6b235e379634 100644 --- a/pages/ko/developer/create-subgraph-hosted.mdx +++ b/pages/ko/developer/create-subgraph-hosted.mdx @@ -218,15 +218,15 @@ Each entity must have an `id` field, which is of type `ID!` (string). The `id` f We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. 
| +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -684,7 +684,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. From 07df4bc1f2e9ffa90e49f866a130530f8dc5cd65 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:20 -0500 Subject: [PATCH 020/432] New translations graphql-api.mdx (Chinese Simplified) --- pages/zh/developer/graphql-api.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/zh/developer/graphql-api.mdx b/pages/zh/developer/graphql-api.mdx index 65928d8734e0..f9cb6214fcd9 100644 --- a/pages/zh/developer/graphql-api.mdx +++ b/pages/zh/developer/graphql-api.mdx @@ -204,12 +204,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) 
| +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples From 0e9c0af38cd0f80b0256476abdc5e84ee0fb1a93 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:23 -0500 Subject: [PATCH 021/432] New translations migrating-subgraph.mdx (Spanish) --- pages/es/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/hosted-service/migrating-subgraph.mdx b/pages/es/hosted-service/migrating-subgraph.mdx index d515d7b7a5b5..eda54d1931ed 100644 --- a/pages/es/hosted-service/migrating-subgraph.mdx +++ b/pages/es/hosted-service/migrating-subgraph.mdx @@ -142,7 +142,7 @@ If you're still confused, fear not! Check out the following resources or watch o title="Reproductor de video de YouTube" frameBorder="0" allowFullScreen - > +> - [The Graph Network Contracts](https://github.com/graphprotocol/contracts) From d53ac01aca04f81660a9476ac346713d8997da45 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:23 -0500 Subject: [PATCH 022/432] New translations migrating-subgraph.mdx (Arabic) --- pages/ar/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/hosted-service/migrating-subgraph.mdx b/pages/ar/hosted-service/migrating-subgraph.mdx index 2fc480afa395..e6abe9dd1895 100644 --- a/pages/ar/hosted-service/migrating-subgraph.mdx +++ b/pages/ar/hosted-service/migrating-subgraph.mdx @@ -142,7 +142,7 @@ If you're still confused, fear not! Check out the following resources or watch o title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen - > +> - [The Graph Network Contracts](https://github.com/graphprotocol/contracts) From bc7d88d146b663adb3d476f476f6ff6e83eb6f21 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:24 -0500 Subject: [PATCH 023/432] New translations migrating-subgraph.mdx (Japanese) --- pages/ja/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/hosted-service/migrating-subgraph.mdx b/pages/ja/hosted-service/migrating-subgraph.mdx index dc4d99ca4108..5ac96ad86eec 100644 --- a/pages/ja/hosted-service/migrating-subgraph.mdx +++ b/pages/ja/hosted-service/migrating-subgraph.mdx @@ -142,7 +142,7 @@ If you're still confused, fear not! 
Check out the following resources or watch o title="YouTube ビデオプレイヤー" frameBorder="0" allowFullScreen - > +> - [The Graph Network Contracts](https://github.com/graphprotocol/contracts) From ab9438c912434cf9fb2e93b1b54441b232861552 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:26 -0500 Subject: [PATCH 024/432] New translations migrating-subgraph.mdx (Korean) --- pages/ko/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/hosted-service/migrating-subgraph.mdx b/pages/ko/hosted-service/migrating-subgraph.mdx index 3456b4f166e4..2a184afcb50e 100644 --- a/pages/ko/hosted-service/migrating-subgraph.mdx +++ b/pages/ko/hosted-service/migrating-subgraph.mdx @@ -142,7 +142,7 @@ If you're still confused, fear not! Check out the following resources or watch o title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> - [The Graph Network Contracts](https://github.com/graphprotocol/contracts) From f786b5c0899b8e0baeac878628b22969048cab27 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:28 -0500 Subject: [PATCH 025/432] New translations migrating-subgraph.mdx (Chinese Simplified) --- pages/zh/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/hosted-service/migrating-subgraph.mdx b/pages/zh/hosted-service/migrating-subgraph.mdx index 9d3c26f43caa..85f72f053b30 100644 --- a/pages/zh/hosted-service/migrating-subgraph.mdx +++ b/pages/zh/hosted-service/migrating-subgraph.mdx @@ -142,7 +142,7 @@ If you're still confused, fear not! Check out the following resources or watch o title="YouTube video player" frameBorder="0" allowFullScreen - > +> - [The Graph Network Contracts](https://github.com/graphprotocol/contracts) From 8f26b2cb9f81d146384c99e783d78babe7a902cf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:39 -0500 Subject: [PATCH 026/432] New translations billing.mdx (Spanish) --- pages/es/studio/billing.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/studio/billing.mdx b/pages/es/studio/billing.mdx index fec638ae5c9f..9a9d4593cced 100644 --- a/pages/es/studio/billing.mdx +++ b/pages/es/studio/billing.mdx @@ -46,7 +46,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide title="Reproductor de video de YouTube" frameBorder="0" allowFullScreen - > +> ### Multisig Users From 2da5f7ccd6b23683b11d1210212ca3a1cddf60fc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:40 -0500 Subject: [PATCH 027/432] New translations billing.mdx (Arabic) --- pages/ar/studio/billing.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/studio/billing.mdx b/pages/ar/studio/billing.mdx index 4e2e3d0c945e..739b216bc86e 100644 --- a/pages/ar/studio/billing.mdx +++ b/pages/ar/studio/billing.mdx @@ -46,7 +46,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen - > +> ### Multisig Users From 78423407c6bcb8f7acb45ca7302360f3375687b1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:41 -0500 Subject: [PATCH 028/432] New translations billing.mdx (Japanese) --- pages/ja/studio/billing.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/billing.mdx b/pages/ja/studio/billing.mdx index 
2ec869670090..f7b6ef7e4765 100644 --- a/pages/ja/studio/billing.mdx +++ b/pages/ja/studio/billing.mdx @@ -46,7 +46,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide title="YouTube ビデオプレイヤー" frameBorder="0" allowFullScreen - > +> ### Multisig Users From 2d3b2293096d46ee6f1c97219facf0de0fb326b6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:42 -0500 Subject: [PATCH 029/432] New translations billing.mdx (Korean) --- pages/ko/studio/billing.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/studio/billing.mdx b/pages/ko/studio/billing.mdx index e909da9cd1e1..a88e6c4ab236 100644 --- a/pages/ko/studio/billing.mdx +++ b/pages/ko/studio/billing.mdx @@ -46,7 +46,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> ### Multisig Users From 256ce2e55114a985ccdb7fe416471beb2a8f30fd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:56 -0500 Subject: [PATCH 030/432] New translations billing.mdx (Chinese Simplified) --- pages/zh/studio/billing.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/studio/billing.mdx b/pages/zh/studio/billing.mdx index c29dc6b4454e..588cd2ed2f40 100644 --- a/pages/zh/studio/billing.mdx +++ b/pages/zh/studio/billing.mdx @@ -46,7 +46,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide title="YouTube video player" frameBorder="0" allowFullScreen - > +> ### Multisig Users From d2e5f23a0f9a3565cf84cb355ec1160966457c5b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:57 -0500 Subject: [PATCH 031/432] New translations delegating.mdx (Korean) --- pages/ko/delegating.mdx | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/pages/ko/delegating.mdx b/pages/ko/delegating.mdx index 22b1e8c5bfd4..20cd2496fa04 100644 --- a/pages/ko/delegating.mdx +++ b/pages/ko/delegating.mdx @@ -28,7 +28,9 @@ This guide will explain how to be an effective delegator in the Graph Network. D 또한 고려해야 할 한 가지는 위임을 위한 인덱서를 현명하게 선택하는 것입니다. 만약 여러분들이 신뢰할 수 없거나 작업을 제대로 수행하지 않는 인덱서를 선택하면 여러분들은 해당 위임의 취소를 원할 것입니다. 이 경우, 보상을 받는 기회를 잃음과 더불어, 단지 여러분의 GRT를 소각하기만 한 결과를 초래할 것입니다. -
위임 UI에는 0.5%의 수수료 및 28일의 위임 해지 기간이 명시되어있습니다.
+
+ 위임 UI에는 0.5%의 수수료 및 28일의 위임 해지 기간이 명시되어있습니다. +
### 위임자들에 대한 공정한 보상 지급 규칙을 지닌 신뢰할 수 있는 인덱서 선택 @@ -87,5 +89,5 @@ A delegator can therefore do the math to determine that the Indexer offering 20% title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> From a4c5b5107da82c0bab866bd39f3ff9a94e89d64c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:58 -0500 Subject: [PATCH 032/432] New translations curating.mdx (Spanish) --- pages/es/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/curating.mdx b/pages/es/curating.mdx index 0b50efa83cc4..af50b41b1f19 100644 --- a/pages/es/curating.mdx +++ b/pages/es/curating.mdx @@ -100,5 +100,5 @@ Still confused? Still confused? Check out our Curation video guide below: title="Reproductor de video de YouTube" frameBorder="0" allowFullScreen - > +> From 3103459f2e334364c04c97b6a1d04f944c8aa55c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:59 -0500 Subject: [PATCH 033/432] New translations curating.mdx (Arabic) --- pages/ar/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/curating.mdx b/pages/ar/curating.mdx index b13e20525f2f..6e37a8776a6f 100644 --- a/pages/ar/curating.mdx +++ b/pages/ar/curating.mdx @@ -100,5 +100,5 @@ title: (التنسيق) curating title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen - > +> From 5310b236d8da419f5b4fc1f6603b6ea919106219 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:04:59 -0500 Subject: [PATCH 034/432] New translations curating.mdx (Japanese) --- pages/ja/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/curating.mdx b/pages/ja/curating.mdx index 6b795800339d..1748d663907a 100644 --- a/pages/ja/curating.mdx +++ b/pages/ja/curating.mdx @@ -100,5 +100,5 @@ Still confused? その他の不明点に関しては、 以下のキュレーシ title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> From e4f6b5461b6ee27b412b17dfe75efeb8a6596c25 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:01 -0500 Subject: [PATCH 035/432] New translations curating.mdx (Korean) --- pages/ko/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/curating.mdx b/pages/ko/curating.mdx index dcb329288947..456deec666f7 100644 --- a/pages/ko/curating.mdx +++ b/pages/ko/curating.mdx @@ -100,5 +100,5 @@ title: 큐레이팅 title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> From efce23d856d379f359cf403d48e014f5a53a952c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:05 -0500 Subject: [PATCH 036/432] New translations curating.mdx (Chinese Simplified) --- pages/zh/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/curating.mdx b/pages/zh/curating.mdx index cc29f02f941c..f56dd059938c 100644 --- a/pages/zh/curating.mdx +++ b/pages/zh/curating.mdx @@ -100,5 +100,5 @@ Remember that curation is risky. 
请做好你的工作,确保你在你信任 title="YouTube video player" frameBorder="0" allowFullScreen - > +> From 780b1659a6b41ad55aca75ec2f3f81ce6019742d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:08 -0500 Subject: [PATCH 037/432] New translations delegating.mdx (Spanish) --- pages/es/delegating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/delegating.mdx b/pages/es/delegating.mdx index e92e8cb8a7d0..e4fd24e9169d 100644 --- a/pages/es/delegating.mdx +++ b/pages/es/delegating.mdx @@ -90,5 +90,5 @@ Utilizando está formula, podemos discernir qué un Indexer el cual está ofreci title="Reproductor de video de YouTube" frameBorder="0" allowFullScreen - > +> From efc3ebb00ad1370d9b29d087b950d169a2826628 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:09 -0500 Subject: [PATCH 038/432] New translations delegating.mdx (Arabic) --- pages/ar/delegating.mdx | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/pages/ar/delegating.mdx b/pages/ar/delegating.mdx index 3a0d55f8363d..207be3e2a948 100644 --- a/pages/ar/delegating.mdx +++ b/pages/ar/delegating.mdx @@ -28,7 +28,9 @@ This guide will explain how to be an effective delegator in the Graph Network. D يجب اختيار المفهرس بحكمة. إذا اخترت مفهرسا ليس جديرا بالثقة ، أو لا يقوم بعمل جيد ، فستحتاج إلى إلغاء التفويض ، مما يعني أنك ستفقد الكثير من الفرص لكسب المكافآت والتي يمكن أن تكون سيئة مثل حرق GRT. -
لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض.
+
+ لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض. +
### اختيار مفهرس جدير بالثقة مع عائد جيد للمفوضين @@ -86,5 +88,5 @@ Using this formula, we can see that it is actually possible for an indexer who i title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen - > +> From 3157578e7c7a32528ccaa2f46909de0c5c562944 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:10 -0500 Subject: [PATCH 039/432] New translations delegating.mdx (Japanese) --- pages/ja/delegating.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ja/delegating.mdx b/pages/ja/delegating.mdx index a9facfe51595..a184aefd3b63 100644 --- a/pages/ja/delegating.mdx +++ b/pages/ja/delegating.mdx @@ -39,8 +39,8 @@ This guide will explain how to be an effective delegator in the Graph Network. D インデキシング報酬カット - インデキシング報酬カットは、インデクサーが自分のために保持する報酬の部分です。 つまり、これが 100%に設定されていると、デリゲーターであるあなたは 0 のインデキシング報酬を得ることになります。 UI に 80%と表示されている場合は、デリゲーターとして 20%を受け取ることになります。 重要な注意点として、ネットワークの初期段階では、インデキシング報酬が報酬の大半を占めます。
- トップのインデクサーは、デリゲーターに90%の報酬を与えています。 The middle one is giving delegators 20%. The bottom - one is giving delegators ~83%.* + トップのインデクサーは、デリゲーターに90%の報酬を与えています。 The + middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
- クエリーフィーカット - これはインデキシングリワードカットと全く同じ働きをします。 しかし、これは特に、インデクサーが収集したクエリフィーに対するリターンを対象としています。 ネットワークの初期段階では、クエリフィーからのリターンは、インデキシング報酬に比べて非常に小さいことに注意する必要があります。 ネットワーク内のクエリフィーがいつから大きくなり始めるのかを判断するために、ネットワークに注意を払うことをお勧めします。 @@ -89,5 +89,5 @@ A delegator can therefore do the math to determine that the Indexer offering 20% title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> From cd630b63ce06ec84bb0566e31332ba3f6deaaefb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:10 -0500 Subject: [PATCH 040/432] New translations delegating.mdx (Chinese Simplified) --- pages/zh/delegating.mdx | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/pages/zh/delegating.mdx b/pages/zh/delegating.mdx index 903363025da7..4702217d5711 100644 --- a/pages/zh/delegating.mdx +++ b/pages/zh/delegating.mdx @@ -28,7 +28,9 @@ Listed below are the main risks of being a delegator in the protocol. 还需要考虑的一件事是明智地选择索引人。 如果您选择了一个不值得信赖的 索引人,或者没有做好工作,您将想要取消委托,这意味着您将失去很多获得奖励的机会,这可能与燃烧 GRT 一样糟糕。 -
请注意委托用户界面中的0.5%费用,以及28天的解约期。
+
+ 请注意委托用户界面中的0.5%费用,以及28天的解约期。 +
### 选择一个为委托人提供公平的奖励分配的值得信赖的索引人 @@ -86,5 +88,5 @@ A delegator can therefore do the math to determine that the Indexer offering 20% title="YouTube video player" frameBorder="0" allowFullScreen - > +> From bf39816f948b95cd620fe4bdd5913470f2ea5b16 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:13 -0500 Subject: [PATCH 041/432] New translations explorer.mdx (Spanish) --- pages/es/explorer.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/explorer.mdx b/pages/es/explorer.mdx index 72bd427987c2..eee9dce5ade9 100644 --- a/pages/es/explorer.mdx +++ b/pages/es/explorer.mdx @@ -11,7 +11,7 @@ Bienvenido al explorador de The Graph, o como nos gusta llamarlo, tu portal desc title="Reproductor de video de YouTube" frameBorder="0" allowFullScreen - > +> ## Subgrafos From cff4cd134152560ac06460a9d7983bbfee2db620 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:14 -0500 Subject: [PATCH 042/432] New translations explorer.mdx (Arabic) --- pages/ar/explorer.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/explorer.mdx b/pages/ar/explorer.mdx index c12ecf1ce809..ae31b016d8a4 100644 --- a/pages/ar/explorer.mdx +++ b/pages/ar/explorer.mdx @@ -11,7 +11,7 @@ title: مستكشف title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen - > +> ## Subgraphs From 84a76ff99d22c9cea5c1fbbfba895f7c7646f584 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:15 -0500 Subject: [PATCH 043/432] New translations explorer.mdx (Japanese) --- pages/ja/explorer.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/explorer.mdx b/pages/ja/explorer.mdx index 1d97785791c9..cbe4fa6e03c0 100644 --- a/pages/ja/explorer.mdx +++ b/pages/ja/explorer.mdx @@ -11,7 +11,7 @@ Welcome to the Graph Explorer, or as we like to call it, your decentralized port title="YouTube ビデオプレイヤー" frameBorder="0" allowFullScreen - > +> ## サブグラフ From f401f7bcf7a3ed83174a21be2b1a4e2ecb569220 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:16 -0500 Subject: [PATCH 044/432] New translations explorer.mdx (Korean) --- pages/ko/explorer.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/explorer.mdx b/pages/ko/explorer.mdx index 7132fa01731d..816139ae9a58 100644 --- a/pages/ko/explorer.mdx +++ b/pages/ko/explorer.mdx @@ -11,7 +11,7 @@ title: 탐색기 title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> ## 서브그래프 From 477a24bed2c36963deeab764accdcfb4414afef4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:17 -0500 Subject: [PATCH 045/432] New translations explorer.mdx (Chinese Simplified) --- pages/zh/explorer.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/explorer.mdx b/pages/zh/explorer.mdx index 7e68053f4b44..c9a0635fdc75 100644 --- a/pages/zh/explorer.mdx +++ b/pages/zh/explorer.mdx @@ -11,7 +11,7 @@ title: 浏览器 title="YouTube video player" frameBorder="0" allowFullScreen - > +> ## 子图 From b9adafbc89fbadaa2ce63ddcaaf3f79551fc8925 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:18 -0500 Subject: [PATCH 046/432] New translations indexing.mdx (Spanish) --- pages/es/indexing.mdx | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/es/indexing.mdx b/pages/es/indexing.mdx index 241841b2e01a..5c0c95375e96 100644 --- 
a/pages/es/indexing.mdx +++ b/pages/es/indexing.mdx @@ -115,7 +115,7 @@ Los indexadores pueden diferenciarse aplicando técnicas avanzadas para tomar de - **Grande**: Preparado para indexar todos los subgrafos utilizados actualmente y atender solicitudes para el tráfico relacionado. | Configuración | (CPUs) | (memoria en GB) | (disco en TB) | (CPUs) | (memoria en GB) | -| ------------- | :----: | :-------------: | :-----------: | :----: | :-------------: | +| ------------- |:------:|:---------------:|:-------------:|:------:|:---------------:| | Pequeño | 4 | 8 | 1 | 4 | 16 | | Estándar | 8 | 30 | 1 | 12 | 48 | | Medio | 16 | 64 | 2 | 32 | 64 | @@ -149,24 +149,24 @@ Nota: Para admitir el escalado ágil, se recomienda que las inquietudes de consu #### Graph Node -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| --- | --- | --- | --- | --- | -| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | -| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | -| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | ---------------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | ------------------- | +| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | +| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | +| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | #### Servicio de Indexador -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| --- | --- | --- | --- | --- | -| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | ----------------------------------------------------------------------- | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | #### Agente Indexador -| Puerto | Objeto | Rutas | Argumento CLI | Variable de
Entorno | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de
Entorno | | ------ | ----------------------------- | ----- | ------------------------- | --------------------------------------- | | 8000 | API de gestión de indexadores | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | From 60a80438a4cae9e98044ae43e36802cf83171498 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:19 -0500 Subject: [PATCH 047/432] New translations indexing.mdx (Arabic) --- pages/ar/indexing.mdx | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/pages/ar/indexing.mdx b/pages/ar/indexing.mdx index 3c05105e8ce0..23625e978032 100644 --- a/pages/ar/indexing.mdx +++ b/pages/ar/indexing.mdx @@ -115,7 +115,7 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute - **كبيرة** - مُعدة لفهرسة جميع ال subgraphs المستخدمة حاليا وأيضا لخدمة طلبات حركة المرور البيانات ذات الصلة. | Setup | (CPUs) | (memory in GB) | (disk in TBs) | (CPUs) | (memory in GB) | -| ----- | :----: | :------------: | :-----------: | :----: | :------------: | +| ----- |:------:|:--------------:|:-------------:|:------:|:--------------:| | صغير | 4 | 8 | 1 | 4 | 16 | | قياسي | 8 | 30 | 1 | 12 | 48 | | متوسط | 16 | 64 | 2 | 32 | 64 | @@ -149,20 +149,20 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### خدمة المفهرس -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent From 4a458978951ba5486506f125dab34cd770264306 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:21 -0500 Subject: [PATCH 048/432] New translations indexing.mdx (Japanese) --- pages/ja/indexing.mdx | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/pages/ja/indexing.mdx b/pages/ja/indexing.mdx index 8a823e93b331..c6c9dd55cc6d 100644 --- a/pages/ja/indexing.mdx +++ b/pages/ja/indexing.mdx @@ -115,7 +115,7 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute - **Large** - 現在使用されているすべてのサブグラフのインデックスを作成し、関連するトラフィックのリクエストに対応します | Setup | (CPUs) | (memory in GB) | (disk in TBs) | (CPUs) | (memory in GB) | -| -------- | :----: | :------------: | :-----------: | :----: | :------------: | +| -------- |:------:|:--------------:|:-------------:|:------:|:--------------:| | Small | 4 | 8 | 1 | 4 | 16 | | Standard | 8 | 30 | 1 | 12 | 48 | | Medium | 16 | 64 | 2 | 32 | 64 | @@ -149,20 +149,20 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute #### グラフノード -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent From 75882e7def065e8e8b215c4187f47119bf9f03f5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:22 -0500 Subject: [PATCH 049/432] New translations indexing.mdx (Korean) --- pages/ko/indexing.mdx | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/pages/ko/indexing.mdx b/pages/ko/indexing.mdx index 1f6508d99ced..b9100d86a58b 100644 --- a/pages/ko/indexing.mdx +++ b/pages/ko/indexing.mdx @@ -115,7 +115,7 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute - **Large** - 현재 사용되는 모든 서브그래프들 및 관련 트레픽 요청의 처리에 대한 요건을 충족합니다. | Setup | (CPUs) | (memory in GB) | (disk in TBs) | (CPUs) | (memory in GB) | -| -------- | :----: | :------------: | :-----------: | :----: | :------------: | +| -------- |:------:|:--------------:|:-------------:|:------:|:--------------:| | Small | 4 | 8 | 1 | 4 | 16 | | Standard | 8 | 30 | 1 | 12 | 48 | | Medium | 16 | 64 | 2 | 32 | 64 | @@ -149,20 +149,20 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute #### 그래프 노드 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent From 4873b28bca2f1411dd98dadbe5b395747d3a1035 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:30 -0500 Subject: [PATCH 050/432] New translations subgraph-studio.mdx (Spanish) --- pages/es/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/studio/subgraph-studio.mdx b/pages/es/studio/subgraph-studio.mdx index 36118547baee..651fa1c96e67 100644 --- a/pages/es/studio/subgraph-studio.mdx +++ b/pages/es/studio/subgraph-studio.mdx @@ -73,7 +73,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF title="Reproductor de video de YouTube" frameBorder="0" allowFullScreen - > +> Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. From 75d28f64de0c6e74fb7aa09d053837e3756a4304 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:31 -0500 Subject: [PATCH 051/432] New translations subgraph-studio.mdx (Arabic) --- pages/ar/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/studio/subgraph-studio.mdx b/pages/ar/studio/subgraph-studio.mdx index 0dfc28807462..60e01374fc04 100644 --- a/pages/ar/studio/subgraph-studio.mdx +++ b/pages/ar/studio/subgraph-studio.mdx @@ -73,7 +73,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen - > +> Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. From fc3c12a6df29fcb7686677ece656800cea4306a4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:32 -0500 Subject: [PATCH 052/432] New translations subgraph-studio.mdx (Japanese) --- pages/ja/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/subgraph-studio.mdx b/pages/ja/studio/subgraph-studio.mdx index 5e2cf18ce86e..1f5ecf6a7011 100644 --- a/pages/ja/studio/subgraph-studio.mdx +++ b/pages/ja/studio/subgraph-studio.mdx @@ -73,7 +73,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF title="YouTube ビデオプレイヤー" frameBorder="0" allowFullScreen - > +> Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. 
From b8685db0097e8e3868c1d20f2bad4f38c86428a2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:32 -0500 Subject: [PATCH 053/432] New translations subgraph-studio.mdx (Korean) --- pages/ko/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/studio/subgraph-studio.mdx b/pages/ko/studio/subgraph-studio.mdx index 492bb376c44b..562d588ef26d 100644 --- a/pages/ko/studio/subgraph-studio.mdx +++ b/pages/ko/studio/subgraph-studio.mdx @@ -73,7 +73,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF title="YouTubeビデオプレーヤー" frameBorder="0" allowFullScreen - > +> Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. From 8ceb0279e4c49e60b7ec58b2516456e2286ff288 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:33 -0500 Subject: [PATCH 054/432] New translations subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/studio/subgraph-studio.mdx b/pages/zh/studio/subgraph-studio.mdx index e01de9f15305..9af3926db3df 100644 --- a/pages/zh/studio/subgraph-studio.mdx +++ b/pages/zh/studio/subgraph-studio.mdx @@ -73,7 +73,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF title="YouTube video player" frameBorder="0" allowFullScreen - > +> Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. From 875a7020e8fd31f584f897521247cfb1847ee613 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:05:34 -0500 Subject: [PATCH 055/432] New translations indexing.mdx (Chinese Simplified) --- pages/zh/indexing.mdx | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/pages/zh/indexing.mdx b/pages/zh/indexing.mdx index a43cd6077f88..f4c2f6f49ef4 100644 --- a/pages/zh/indexing.mdx +++ b/pages/zh/indexing.mdx @@ -115,11 +115,11 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute - **大型** -准备对当前使用的所有子图进行索引,并为相关流量的请求提供服务 | Setup | (CPU 数量) | (内存 GB) | (硬盘 TB) | (CPU 数量) | (内存 GB) | -| ----- | :--------: | :-------: | :-------: | :--------: | :-------: | -| 小型 | 4 | 8 | 1 | 4 | 16 | -| 标准 | 8 | 30 | 1 | 12 | 48 | -| 中型 | 16 | 64 | 2 | 32 | 64 | -| 大型 | 72 | 468 | 3.5 | 48 | 184 | +| ----- |:--------:|:-------:|:-------:|:--------:|:-------:| +| 小型 | 4 | 8 | 1 | 4 | 16 | +| 标准 | 8 | 30 | 1 | 12 | 48 | +| 中型 | 16 | 64 | 2 | 32 | 64 | +| 大型 | 72 | 468 | 3.5 | 48 | 184 | ### 索引人应该采取哪些基本的安全防范措施? @@ -149,26 +149,26 @@ At the center of an indexer's infrastructure is the Graph Node which monitors Et #### Graph 节点 -| 端口 | Purpose | 路径 | CLI Argument | 环境 变量 | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP 服务
(用于子图查询) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(用于子图订阅) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(用于管理部署) | / | --admin-port | - | -| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | -| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | +| 端口 | Purpose | 路径 | CLI Argument | 环境 变量 | +| ---- | ------------------------------------ | ------------------------------------------------------------------- | ----------------- | ----- | +| 8000 | GraphQL HTTP 服务
(用于子图查询) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(用于子图订阅) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(用于管理部署) | / | --admin-port | - | +| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | +| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | #### 索引人服务 -| 端口 | Purpose | 路径 | CLI Argument | 环境 变量 | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP 服务
(用于付费子图查询) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus 指标 | /metrics | --metrics-port | - | +| 端口 | Purpose | 路径 | CLI Argument | 环境 变量 | +| ---- | -------------------------------------- | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP 服务
(用于付费子图查询) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus 指标 | /metrics | --metrics-port | - | #### 索引人代理 -| 端口 | Purpose | 路径 | CLI Argument | 环境
变量 | -| ---- | -------------- | ---- | ------------------------- | --------------------------------------- | -| 8000 | 索引人管理 API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| 端口 | Purpose | 路径 | CLI Argument | 环境
变量 | +| ---- | --------- | -- | ------------------------- | --------------------------------------- | +| 8000 | 索引人管理 API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | ### Google Cloud 上使用 Terraform 建立基础架构 @@ -659,7 +659,7 @@ default => 0.1 * $SYSTEM_LOAD; 成本模型示例: -| 询问 | 价格 | +| 询问 | 价格 | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | From ba1daa6f85a1b37bea06dfd0a68f46d0480ba17b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:19 -0500 Subject: [PATCH 056/432] New translations define-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/define-subgraph-hosted.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/developer/define-subgraph-hosted.mdx b/pages/zh/developer/define-subgraph-hosted.mdx index 92bf5bd8cd2f..6006b117aa62 100644 --- a/pages/zh/developer/define-subgraph-hosted.mdx +++ b/pages/zh/developer/define-subgraph-hosted.mdx @@ -2,7 +2,7 @@ title: Define a Subgraph --- -A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. +A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. Once deployed, it will form a part of a global graph of blockchain data. ![Define a Subgraph](/img/define-subgraph.png) From 2c7fa2617cb7014b3d414b9b87de8047e919885a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:20 -0500 Subject: [PATCH 057/432] New translations deprecating-a-subgraph.mdx (Spanish) --- pages/es/developer/deprecating-a-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/developer/deprecating-a-subgraph.mdx b/pages/es/developer/deprecating-a-subgraph.mdx index f8966e025c13..1206e2a6f622 100644 --- a/pages/es/developer/deprecating-a-subgraph.mdx +++ b/pages/es/developer/deprecating-a-subgraph.mdx @@ -2,13 +2,13 @@ title: Deprecating a Subgraph --- -So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: +So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: Follow the steps below: 1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) 2. Call 'deprecateSubgraph' with your own address as the first parameter 3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. 4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` -5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: +5. Voila! Voila! Your subgraph will no longer show up on searches on The Graph Explorer. 
Please note the following: - Curators will not be able to signal on the subgraph anymore - Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price From 0b87ec70ef58184ad4af53b00679549d8552803f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:22 -0500 Subject: [PATCH 058/432] New translations deprecating-a-subgraph.mdx (Chinese Simplified) --- pages/zh/developer/deprecating-a-subgraph.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/zh/developer/deprecating-a-subgraph.mdx b/pages/zh/developer/deprecating-a-subgraph.mdx index f8966e025c13..726461b4c46c 100644 --- a/pages/zh/developer/deprecating-a-subgraph.mdx +++ b/pages/zh/developer/deprecating-a-subgraph.mdx @@ -2,13 +2,13 @@ title: Deprecating a Subgraph --- -So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: +So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: You've come to the right place! Follow the steps below: 1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) 2. Call 'deprecateSubgraph' with your own address as the first parameter 3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. -4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` -5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: +4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` +5. Voila! Voila! Your subgraph will no longer show up on searches on The Graph Explorer. 
Please note the following: Please note the following: - Curators will not be able to signal on the subgraph anymore - Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price From 88629f6200c3837835c99f42df7f847cce83fdde Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:23 -0500 Subject: [PATCH 059/432] New translations developer-faq.mdx (Spanish) --- pages/es/developer/developer-faq.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/developer/developer-faq.mdx b/pages/es/developer/developer-faq.mdx index 41449c60e5ab..b1fcab2b8ef8 100644 --- a/pages/es/developer/developer-faq.mdx +++ b/pages/es/developer/developer-faq.mdx @@ -44,7 +44,7 @@ docker pull graphprotocol/graph-node:latest Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). -### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. @@ -91,7 +91,7 @@ Yes, you should take a look at the optional start block feature to start indexin ### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +Yes! Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql From d5bb3f4576e98321e21690310a19afbaf30a11a3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:26 -0500 Subject: [PATCH 060/432] New translations developer-faq.mdx (Chinese Simplified) --- pages/zh/developer/developer-faq.mdx | 88 ++++++++++++++-------------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/pages/zh/developer/developer-faq.mdx b/pages/zh/developer/developer-faq.mdx index 41449c60e5ab..58380c271633 100644 --- a/pages/zh/developer/developer-faq.mdx +++ b/pages/zh/developer/developer-faq.mdx @@ -2,35 +2,35 @@ title: Developer FAQs --- -### 1. Can I delete my subgraph? +### 1. 1. Can I delete my subgraph? It is not possible to delete subgraphs once they are created. -### 2. Can I change my subgraph name? +### 2. 2. Can I change my subgraph name? No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. -### 3. Can I change the GitHub account associated with my subgraph? +### 3. 3. Can I change the GitHub account associated with my subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. 
Make sure to think of this carefully before you create your subgraph. +No. No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. Make sure to think of this carefully before you create your subgraph. -### 4. Am I still able to create a subgraph if my smart contracts don't have events? +### 4. 4. Am I still able to create a subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. +If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. Although this is not recommended as performance will be significantly slower. -### 5. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. 5. Is it possible to deploy one subgraph with the same name for multiple networks? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +You will need separate names for multiple networks. You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. How are templates different from data sources? +### 6. 6. How are templates different from data sources? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources on the fly, while your subgraph is indexing. Templates allow you to create data sources on the fly, while your subgraph is indexing. 
It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). -### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 7. 7. How do I make sure I'm using the latest version of graph-node for my local deployments? You can run the following command: @@ -40,31 +40,31 @@ docker pull graphprotocol/graph-node:latest **NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. -### 8. How do I call a contract function or access a public state variable from my subgraph mappings? +### 8. 8. How do I call a contract function or access a public state variable from my subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). -### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? -Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. +Unfortunately this is currently not possible. Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. -### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? +### 10. 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 11. 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 12. 12. 
When listening to multiple contracts, is it possible to select the contract order to listen to events? Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? +### 13. 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? -Yes. You can do this by importing `graph-ts` as per the example below: +Yes. Yes. You can do this by importing `graph-ts` as per the example below: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,31 +73,31 @@ dataSource.network() dataSource.address() ``` -### 14. Do you support block and call handlers on Rinkeby? +### 14. 14. Do you support block and call handlers on Rinkeby? -On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. +On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. Call handlers are not supported for the time being. -### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? +### 15. 15. Can I import ethers.js or other JS libraries into my subgraph mappings? -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Not currently, as mappings are written in AssemblyScript. Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. -### 16. Is it possible to specifying what block to start indexing on? +### 16. 16. Is it possible to specifying what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks +Yes. Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks In most cases we suggest using the block in which the contract was created: Start blocks -### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. +### 17. 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. My subgraph is taking a very long time to sync. Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) -### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? +### 18. 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +Yes! Yes! 
Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. What networks are supported by The Graph? +### 19. 19. What networks are supported by The Graph? The graph-node supports any EVM-compatible JSON RPC API chain. @@ -135,38 +135,38 @@ In the Hosted Service, the following networks are supported: There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). -### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 20. 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 21. Is this possible to use Apollo Federation on top of graph-node? +### 21. 21. Is this possible to use Apollo Federation on top of graph-node? -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. +Federation is not supported yet, although we do want to support it in the future. Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. -### 22. Is there a limit to how many objects The Graph can return per query? +### 22. 22. Is there a limit to how many objects The Graph can return per query? -By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: +By default query responses are limited to 100 items per collection. By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -### 24. Where do I go to find my current subgraph on the Hosted Service? +### 24. 24. Where do I go to find my current subgraph on the Hosted Service? -Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. 
You can find it [here.](https://thegraph.com/hosted-service) +Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) You can find it [here.](https://thegraph.com/hosted-service) -### 25. Will the Hosted Service start charging query fees? +### 25. 25. Will the Hosted Service start charging query fees? -The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. +The Graph will never charge for the Hosted Service. The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. -### 26. When will the Hosted Service be shut down? +### 26. 26. When will the Hosted Service be shut down? If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service. -### 27. How do I upgrade a subgraph on mainnet? +### 27. 27. How do I upgrade a subgraph on mainnet? -If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. From 1045f72a961c8a2b01b9e3719890c5f65992d5e8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:29 -0500 Subject: [PATCH 061/432] New translations distributed-systems.mdx (Chinese Simplified) --- pages/zh/developer/distributed-systems.mdx | 54 ++++++++++++++++++++-- 1 file changed, 49 insertions(+), 5 deletions(-) diff --git a/pages/zh/developer/distributed-systems.mdx b/pages/zh/developer/distributed-systems.mdx index 894fcbe2e18b..ae06b86555f7 100644 --- a/pages/zh/developer/distributed-systems.mdx +++ b/pages/zh/developer/distributed-systems.mdx @@ -21,17 +21,17 @@ Consider this example of what may occur if a client polls an Indexer for the lat From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. 
Along the way, the Indexer serves requests using the latest state it knows about at that time. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. ## Polling for updated data -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. 
We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example:

```javascript
/// Updates the protocol.paused variable to the latest
/// known value in a loop by fetching it using The Graph.
async function updateProtocolPaused() {
  // It's ok to start with minBlock at 0. The query will be served
  // using the latest block available. Setting minBlock to 0 is the
  // same as leaving out that argument.
  let minBlock = 0

  for (;;) {
    // Schedule a promise that will be ready once
    // the next Ethereum block will likely be available.
@@ -71,11 +82,17 @@ async function updateProtocolPaused() {
    await nextBlock
  }
}
```

## Fetching a set of related items

Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time.

Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block.

```javascript
/// Gets a list of domain names from a single block using pagination
async function getDomainNames() {
  // Set a cap on the maximum number of items to pull.
  let pages = 5
  const perPage = 1000

  // The first query will get the first page of results and also get the block
  // hash so that the remainder of the queries are consistent with the first.
  let query = `
@@ -126,6 +151,25 @@ async function getDomainNames() {
  while (data.domains.length == perPage && --pages) {
    let lastID = data.domains[data.domains.length - 1].id
    query = `
      {
        domains(first: ${perPage}, where: { id_gt: "${lastID}" }, block: { hash: "${blockHash}" }) {
          name
          id
        }
      }`

    data = await graphql(query)

    // Accumulate domain names into the result
    for (domain of data.domains) {
      result.push(domain.name)
    }
  }
  return result
}
```
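Assuming the `graphql` helper used above simply POSTs a query string and returns the parsed `data` (the helper itself is not shown in this excerpt), the function could be exercised like this:

```typescript
// Illustrative only: fetch up to pages × perPage domain names,
// all read from the same pinned block.
getDomainNames().then((names) => {
  console.log(`fetched ${names.length} domain names`)
})
```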
From eef9a6329c99798b96247ae3cf7feed0f8c69452 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:35 -0500
Subject: [PATCH 062/432] New translations assemblyscript-api.mdx (Japanese)

---
 pages/ja/developer/assemblyscript-api.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/ja/developer/assemblyscript-api.mdx b/pages/ja/developer/assemblyscript-api.mdx
index a609e6cd657f..16e11164366f 100644
--- a/pages/ja/developer/assemblyscript-api.mdx
+++ b/pages/ja/developer/assemblyscript-api.mdx
@@ -567,7 +567,7 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile'
 let data = ipfs.cat(path)
 ```

**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information.

It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior:

From 8aa1d5762da34044cb6985aadd9889fdd991b1cf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:37 -0500
Subject: [PATCH 063/432] New translations introduction.mdx (Chinese Simplified)

---
 pages/zh/about/introduction.mdx | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/pages/zh/about/introduction.mdx b/pages/zh/about/introduction.mdx
index 5f840c040400..5d579bbc364f 100644
--- a/pages/zh/about/introduction.mdx
+++ b/pages/zh/about/introduction.mdx
@@ -6,25 +6,25 @@ This page will explain what The Graph is and how you can get started.

## What The Graph Is

The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly.

Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFT initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain.

In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself.

To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer.

You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data.
However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.

**Indexing blockchain data is really, really hard.**

Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data.

The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node).

## How The Graph Works

The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database.

Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph.

The flow follows these steps:

1. A decentralized application adds data to Ethereum through a transaction on a smart contract.
2. The smart contract emits one or more events while processing the transaction.
3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain.
4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. (A rough sketch of such a query follows this list.)
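As a rough sketch of step 5, a dapp might issue such a query over HTTP; the endpoint URL is a placeholder, and the `gravatars` entity is borrowed from the example subgraph discussed later, not a real deployment:

```typescript
// Hypothetical dapp-side query against a Graph Node's GraphQL endpoint.
const ENDPOINT = 'https://api.thegraph.com/subgraphs/name/<ACCOUNT>/<SUBGRAPH>' // placeholder

async function fetchGravatars() {
  const response = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ gravatars(first: 5) { id displayName } }' }),
  })
  const { data } = await response.json()
  return data.gravatars
}
```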
## Next Steps

In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds.

Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL.

From dcb380e146986dbf89d7d60667bdb7bc9883d148 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:40 -0500
Subject: [PATCH 064/432] New translations network.mdx (Chinese Simplified)

---
 pages/zh/about/network.mdx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pages/zh/about/network.mdx b/pages/zh/about/network.mdx
index b19f08d12bc7..10d1d992fcab 100644
--- a/pages/zh/about/network.mdx
+++ b/pages/zh/about/network.mdx
@@ -2,14 +2,14 @@ title: Network Overview
---

The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure.
> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7)

## Overview

The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data.

![Token Economics](/img/Network-roles@2x.png)

To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake.

From e2f8680ddce45b942c9b4e5774b3f09e12a1fbe6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:41 -0500
Subject: [PATCH 065/432] New translations assemblyscript-api.mdx (Spanish)

---
 pages/es/developer/assemblyscript-api.mdx | 29 +++++++++++++----------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/pages/es/developer/assemblyscript-api.mdx b/pages/es/developer/assemblyscript-api.mdx
index a609e6cd657f..c070c682f6e6 100644
--- a/pages/es/developer/assemblyscript-api.mdx
+++ b/pages/es/developer/assemblyscript-api.mdx
@@ -68,7 +68,7 @@ import { ByteArray } from '@graphprotocol/graph-ts'

 _Construction_

 - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes.
-- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional.
+- `x.times(y: BigInt): BigInt` – can be written as `x * y`.

 _Type conversions_
@@ -126,9 +126,9 @@ The `BigInt` class has the following API:

 _Construction_

 - `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`.
-- `BigInt.fromString(s: string): BigInt` – Parses a `BigInt` from a string.
-- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first.
 - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first.
+- `BigInt.fromString(s: string): BigInt` – Parses a `BigInt` from a string.
+- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. _Type conversions_ @@ -327,9 +327,11 @@ The following example illustrates this. Given a subgraph schema like ```graphql type Transfer @entity { - from: Bytes! - to: Bytes! - amount: BigInt! + from: + Bytes! + to: + Bytes! + amount: } ``` @@ -369,7 +371,6 @@ class Block { receiptsRoot: Bytes number: BigInt gasUsed: BigInt - gasLimit: BigInt timestamp: BigInt difficulty: BigInt totalDifficulty: BigInt @@ -383,7 +384,6 @@ class Transaction { from: Address to: Address | null value: BigInt - gasLimit: BigInt gasPrice: BigInt input: Bytes nonce: BigInt @@ -439,14 +439,16 @@ Data can be encoded and decoded according to Ethereum's ABI encoding format usin import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' let tupleArray: Array = [ - ethereum.Value.fromAddress(Address.fromString('0x0000000000000000000000000000000000000420')), - ethereum.Value.fromUnsignedBigInt(BigInt.fromI32(62)), + ethereum. Value.fromAddress(Address.fromString('0x0000000000000000000000000000000000000420')), + ethereum. Value.fromUnsignedBigInt(BigInt.fromI32(62)), ] -let tuple = tupleArray as ethereum.Tuple +let tuple = tupleArray as ethereum. Tuple let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! +let decoded = ethereum.decode('(address,uint256)', encoded) + let decoded = ethereum.decode('(address,uint256)', encoded) ``` @@ -567,7 +569,7 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. +**Note:** `ipfs.cat` is not deterministic at the moment. Due to this, it's always worth checking the result for `null`. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: @@ -630,7 +632,8 @@ The `JSONValue` class provides a way to pull values out of an arbitrary JSON doc ```typescript let value = json.fromBytes(...) -if (value.kind == JSONValueKind.BOOL) { +if (value.kind == JSONValueKind. +BOOL) { ... 
}
```

From ffe34d191469ce7ec5d45f2ab38b96a56b39d600 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:42 -0500
Subject: [PATCH 066/432] New translations assemblyscript-api.mdx (Arabic)

---
 pages/ar/developer/assemblyscript-api.mdx | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/pages/ar/developer/assemblyscript-api.mdx b/pages/ar/developer/assemblyscript-api.mdx
index a609e6cd657f..add60911b969 100644
--- a/pages/ar/developer/assemblyscript-api.mdx
+++ b/pages/ar/developer/assemblyscript-api.mdx
@@ -438,14 +438,14 @@ Data can be encoded and decoded according to Ethereum's ABI encoding format usin

```typescript
import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts'

let tupleArray: Array<ethereum.Value> = [
  ethereum.Value.fromAddress(Address.fromString('0x0000000000000000000000000000000000000420')),
  ethereum.Value.fromUnsignedBigInt(BigInt.fromI32(62)),
]

let tuple = tupleArray as ethereum.Tuple

let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))!

let decoded = ethereum.decode('(address,uint256)', encoded)
```

@@ -567,7 +567,7 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile'
 let data = ipfs.cat(path)
 ```

**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information.

It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior:
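As a rough sketch only: the callback signature, the `'json'` flag, and every name below are assumptions used for illustration, not something quoted from this patch.

```typescript
import { ipfs, JSONValue, Value } from '@graphprotocol/graph-ts'

// Assumed callback shape: it receives each parsed value from the file plus
// the user data passed through the ipfs.map call.
export function processItem(value: JSONValue, userData: Value): void {
  let obj = value.toObject()
  // ... read fields from `obj` and create or update entities here
}

// Inside a handler (the hash and user data are placeholders); the 'json' flag
// asks for the file to be parsed as JSON before the callback is invoked.
ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json'])
```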
From 804eccf716dadac92bd03e3e72b9f77102ef946a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:44 -0500
Subject: [PATCH 067/432] New translations assemblyscript-api.mdx (Korean)

---
 pages/ko/developer/assemblyscript-api.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/ko/developer/assemblyscript-api.mdx b/pages/ko/developer/assemblyscript-api.mdx
index a609e6cd657f..16e11164366f 100644
--- a/pages/ko/developer/assemblyscript-api.mdx
+++ b/pages/ko/developer/assemblyscript-api.mdx
@@ -567,7 +567,7 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile'
 let data = ipfs.cat(path)
 ```

**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information.

It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior:

From e575aa65a722ec4849d7145a33e7778938e31484 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Wed, 12 Jan 2022 01:29:46 -0500
Subject: [PATCH 068/432] New translations create-subgraph-hosted.mdx (Chinese Simplified)

---
 pages/zh/developer/create-subgraph-hosted.mdx | 196 +++++++++++-------
 1 file changed, 121 insertions(+), 75 deletions(-)

diff --git a/pages/zh/developer/create-subgraph-hosted.mdx b/pages/zh/developer/create-subgraph-hosted.mdx
index 6b235e379634..86b0d3df18d8 100644
--- a/pages/zh/developer/create-subgraph-hosted.mdx
+++ b/pages/zh/developer/create-subgraph-hosted.mdx
@@ -2,9 +2,9 @@ title: Create a Subgraph
---

Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice.
Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. -The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. +The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. ## Supported Networks @@ -44,7 +44,7 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `aurora` - `aurora-testnet` -The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. +The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). @@ -65,17 +65,17 @@ The `` is the ID of your subgraph in Subgraph Studio, it can be f ## From An Example Subgraph -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: The following command does this: ``` graph init --studio ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. 
The following sections will go over the files that make up the subgraph manifest for this example. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. ## The Subgraph Manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). For the example subgraph, `subgraph.yaml` is: @@ -120,17 +120,17 @@ dataSources: The important entries to update for the manifest are: -- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. +- `description`: a human-readable description of what the subgraph is. `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. This is also displayed by the Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. 
In most cases we suggest using the block in which the contract was created. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. The schema for each entity is defined in the the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. @@ -138,9 +138,9 @@ The important entries to update for the manifest are: - `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single subgraph can index data from multiple smart contracts. A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. The triggers for a data source within a block are ordered using the following process: @@ -152,21 +152,21 @@ These ordering rules are subject to change. ### Getting The ABIs -The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: +The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. - If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. 
Make sure you have the right ABI, otherwise running your subgraph will fail. ## The GraphQL Schema -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. ## Defining Entities Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. -With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. +With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. Each type that should be an entity is required to be annotated with an `@entity` directive. ### Good Example @@ -184,7 +184,7 @@ type Gravatar @entity { ### Bad Example -The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. +The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. It is not recommended to map events or function calls to entities 1:1. ```graphql type GravatarAccepted @entity { @@ -199,18 +199,29 @@ type GravatarDeclined @entity { owner: Bytes displayName: String imageUrl: String +} + type Gravatar @entity { + id: ID! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} + owner: Bytes + displayName: String + imageUrl: String } ``` ### Optional and Required Fields -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. 
If a required field is not set in the mapping, you will receive this error when querying the field:

```
Null value resolved for non-null field 'name'
```

Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type.

### Built-In Scalar Types

#### GraphQL Supported Scalars

We support the following scalars in our GraphQL API:

| Type | Description |
| --- | --- |
| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. |
| `ID` | Stored as a `string`. |
| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
| `Boolean` | Scalar for `boolean` values. |
| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. |
| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
| `BigDecimal` | High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |

#### Enums

You can also create enums within a schema. Enums have the following syntax:

```graphql
enum TokenStatus {
  OriginalOwner
  SecondOwner
  ThirdOwner
}
```

Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field:
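The Token schema example referred to above is not reproduced in this excerpt. As a rough sketch only (assuming a schema field declared as `tokenStatus: TokenStatus!` on a `Token` entity, a generated `Token` entity class, and a hypothetical `tokenId` event parameter), the value could be set from a mapping like this:

```typescript
// Hypothetical names throughout; enum values are assigned via their string representation.
let token = new Token(event.params.tokenId.toHex())
token.tokenStatus = 'SecondOwner'
token.save()
```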
More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/).

#### Entity Relationships

An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship.

Relationships are defined on entities just like any other field except that the type specified is that of another entity.

#### Example

Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type:

```graphql
type Transaction @entity {
  id: ID!
  transactionReceipt: TransactionReceipt
}

type TransactionReceipt @entity {
  id: ID!
  transaction: Transaction
}
```

#### Example

Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type:

```graphql
type Token @entity {
  id: ID!
}

type TokenBalance @entity {
  id: ID!
  amount: Int!
  token: Token!
}
```

#### Reverse Lookups

Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.

For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived.
Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -294,6 +314,8 @@ We can make the balances for a token accessible from the token by deriving a `to ```graphql type Token @entity { + id: ID! + tokenBalances: [TokenBalance!]! type Token @entity { id: ID! tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") } @@ -302,16 +324,19 @@ type TokenBalance @entity { id: ID! amount: Int! token: Token! +} + amount: Int! + token: Token! } ``` #### Many-To-Many Relationships -For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. #### Example -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. ```graphql type Organization @entity { @@ -339,11 +364,17 @@ type Organization @entity { type User @entity { id: ID! name: String! - organizations: [UserOrganization!] @derivedFrom(field: "organization") + organizations: [UserOrganization!] type Organization @entity { + id: ID! + name: String! + members: [User!]! } -type UserOrganization @entity { - id: ID! 
# Set to `${user.id}-${organization.id}` +type User @entity { + id: ID! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} # Set to `${user.id}-${organization.id}` user: User! organization: Organization! } @@ -368,21 +399,23 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: This is illustrated in the example below: ```graphql type MyFirstEntity @entity { "unique identifier and primary key of the entity" id: ID! address: Bytes! +} + address: Bytes! } ``` ## Defining Fulltext Search Fields -Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. +Fulltext search queries filter and rank entities based on a text search input. Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. -A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. @@ -404,10 +437,18 @@ type Band @entity { labels: [Label!]! discography: [Album!]! members: [Musician!]! +} + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. ```graphql query { @@ -424,7 +465,7 @@ query { ### Languages supported -Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. 
### Languages supported

Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token".

Supported language dictionaries:

Supported algorithms for ordering results:

## Writing Mappings

The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax.

For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.

In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events.
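A minimal sketch of what these two handlers can look like, assuming the `Gravatar` entity exposes `owner`, `displayName` and `imageUrl` fields matching the event parameters:

```typescript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Create a new Gravatar entity, keyed by the id event parameter.
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}

export function handleUpdatedGravatar(event: UpdatedGravatar): void {
  // Load the existing entity, or create it on demand if it does not exist yet.
  let gravatar = Gravatar.load(event.params.id.toHex())
  if (gravatar == null) {
    gravatar = new Gravatar(event.params.id.toHex())
  }
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```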
The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`.

The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`.

### Recommended IDs for Creating New Entities

Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`.

- `event.params.id.toHex()`
- `event.transaction.from.toHex()`
- `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`

We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`.

## Code Generation

Running `yarn codegen` (or `npm run codegen`) will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`.
In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with ```javascript import { @@ -535,23 +576,23 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. It must also be performed at least once before building or deploying the subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. ## Data Source Templates -A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. 
This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. ### Data Source for the Main Contract -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. +First, you define a regular data source for the main contract. First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. ```yaml dataSources: @@ -578,9 +619,13 @@ dataSources: ### Data Source Templates for Dynamically Created Contracts -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. ```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... dataSources: - kind: ethereum/contract name: Factory @@ -614,7 +659,7 @@ templates: ### Instantiating a Data Source Template -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. 
In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. ```typescript import { Exchange } from '../generated/templates' @@ -632,7 +677,7 @@ export function handleNewExchange(event: NewExchange): void { ### Data Source Context -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' @@ -657,7 +702,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -695,7 +740,7 @@ While events provide an effective way to collect relevant changes to the state o Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. +> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. Call handlers currently depend on the Parity tracing API and these networks do not support it. ### Defining a Call Handler @@ -724,11 +769,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. 
+The `function` is the normalized function signature to filter calls by. The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -743,11 +788,11 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. +In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. ### Supported Filters @@ -758,7 +803,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. +The absense of a filter for a block handler will ensure that the handler is called every block. The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. ```yaml dataSources: @@ -787,7 +832,7 @@ dataSources: ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. 
The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -810,7 +855,7 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +An event will only be triggered when both the signature and topic 0 match. An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. ## Experimental features @@ -840,7 +885,7 @@ Note that using a feature without declaring it will incur in a **validation erro A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. -Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). > **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. @@ -850,7 +895,7 @@ In order to make this easy for subgraph developers, The Graph team wrote a tool ### Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. 
Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. > **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. @@ -864,7 +909,7 @@ features: ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -898,24 +943,25 @@ If the subgraph encounters an error that query will return both the data and a g ### Grafting onto Existing Subgraphs -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. -> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** Grafting requires that the Indexer has indexed the base subgraph. **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: ```yaml description: ... +description: ... graft: base: Qm... 
# Subgraph ID of base subgraph
  block: 7345624 # Block number
```

When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.

Because grafting copies rather than indexes base data, it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.

The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it.
It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types From e8299f7dcb70d3bf0945b3bec9e2338614440118 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:48 -0500 Subject: [PATCH 069/432] New translations assemblyscript-api.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-api.mdx | 90 +++++++++++++---------- 1 file changed, 52 insertions(+), 38 deletions(-) diff --git a/pages/zh/developer/assemblyscript-api.mdx b/pages/zh/developer/assemblyscript-api.mdx index a609e6cd657f..a29d1314de5b 100644 --- a/pages/zh/developer/assemblyscript-api.mdx +++ b/pages/zh/developer/assemblyscript-api.mdx @@ -4,16 +4,16 @@ title: AssemblyScript API > Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: Two kinds of APIs are available out of the box: - the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and - code generated from subgraph files by `graph codegen`. -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. ## Installation -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: All that is required to install these dependencies is to run one of the following commands: ```sh yarn install # Yarn @@ -41,7 +41,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. 
The current mapping API version is 0.0.6. | Version | Release notes | |:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | @@ -68,15 +68,15 @@ import { ByteArray } from '@graphprotocol/graph-ts' _Construction_ - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. Prefixing with `0x` is optional. _Type conversions_ - `toHexString(): string` - Converts to a hex string prefixed with `0x`. - `toString(): string` - Interprets the bytes as a UTF-8 string. - `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. Throws in case of overflow. +- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. Throws in case of overflow. _Operators_ @@ -119,7 +119,7 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. The `BigInt` class has the following API: @@ -127,14 +127,14 @@ _Construction_ - `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. - `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. If your input is big-endian, call `.reverse()` first. _Type conversions_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. - `x.toString(): string` – turns `BigInt` into a decimal number string. -- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. 
It's a good idea to first check `x.isI32()`. - `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. _Math_ @@ -167,7 +167,7 @@ _Math_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). The `TypedMap` class has the following API: @@ -183,7 +183,7 @@ The `TypedMap` class has the following API: import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. +`Bytes` is used to represent arbitrary-length arrays of bytes. `Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: @@ -211,7 +211,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -241,9 +241,9 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. 
This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -259,16 +259,16 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. It may thus be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. See the next section for the two ways of updating existing entities. #### Updating existing entities There are two ways to update an existing entity: 1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. If the entity already exists, the changes are merged into it. Changing properties is straight forward in most cases, thanks to the generated property setters: @@ -277,6 +277,8 @@ let transfer = new Transfer(id) transfer.from = ... transfer.to = ... transfer.amount = ... +transfer.to = ... +transfer.amount = ... ``` It is also possible to unset properties with one of the following two instructions: @@ -286,9 +288,9 @@ transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. +This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. Two examples would be `owner: Bytes` or `amount: BigInt`. -Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. 
The following assumes `entity` has a `numbers: [BigInt!]!` field. +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. ```typescript // This won't work @@ -304,11 +306,13 @@ entity.save() #### Removing entities from the store -There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +There is currently no way to remove an entity via the generated types. There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' ... +import { store } from '@graphprotocol/graph-ts' +... let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` @@ -319,17 +323,20 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a subgraph schema like Given a subgraph schema like ```graphql type Transfer @entity { from: Bytes! to: Bytes! amount: BigInt! +} + to: Bytes! + amount: BigInt! } ``` @@ -346,7 +353,7 @@ transfer.save() #### Events and Block/Transaction Data -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. 
The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -392,9 +399,9 @@ class Transaction { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +A common pattern is to access the contract from which an event originates. This is achieved with the following code: This is achieved with the following code: ```typescript // Import the generated contract class @@ -411,13 +418,13 @@ export function handleTransfer(event: Transfer) { } ``` -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. For public state variables a method with the same name is created automatically. Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -447,6 +454,8 @@ let tuple = tupleArray as ethereum.Tuple let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! +let decoded = ethereum.decode('(address,uint256)', encoded) + let decoded = ethereum.decode('(address,uint256)', encoded) ``` @@ -462,7 +471,7 @@ For more information: import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. 
Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -472,7 +481,7 @@ The `log` API includes the following functions: - `log.error(fmt: string, args: Array): void` - logs an error message. - `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) @@ -508,7 +517,7 @@ export function handleSomeEvent(event: SomeEvent): void { #### Logging multiple entries from an existing array -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. ```typescript let myArray = ['A', 'B', 'C'] @@ -543,6 +552,9 @@ export function handleSomeEvent(event: SomeEvent): void { event.block.hash.toHexString(), // "0x..." event.transaction.hash.toHexString(), // "0x..." ]) +} + event.transaction.hash.toHexString(), // "0x..." + ]) } ``` @@ -552,7 +564,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. Given an IPFS hash or path, reading a file from IPFS is done as follows: @@ -569,7 +581,7 @@ let data = ipfs.cat(path) **Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. 
Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,9 +611,9 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. ### Crypto API @@ -609,7 +621,7 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. 
Right now, there is only one: +The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: Right now, there is only one: - `crypto.keccak256(input: ByteArray): ByteArray` @@ -626,13 +638,15 @@ JSON data can be parsed using the `json` API: - `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` - `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: ```typescript let value = json.fromBytes(...) +let value = json.fromBytes(...) if (value.kind == JSONValueKind.BOOL) { ... } +} ``` In addition, there is a method to check if the value is `null`: From ec6eee8726e20e61c58ffbfe24ca16fc9cf5529e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:49 -0500 Subject: [PATCH 070/432] New translations assemblyscript-migration-guide.mdx (Spanish) --- .../assemblyscript-migration-guide.mdx | 68 ++++++++++++------- 1 file changed, 42 insertions(+), 26 deletions(-) diff --git a/pages/es/developer/assemblyscript-migration-guide.mdx b/pages/es/developer/assemblyscript-migration-guide.mdx index 2db90a608110..5cb77a52422e 100644 --- a/pages/es/developer/assemblyscript-migration-guide.mdx +++ b/pages/es/developer/assemblyscript-migration-guide.mdx @@ -2,7 +2,7 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 🎉 That will enable subgraph developers to use newer features of the AS language and standard library. @@ -48,11 +48,10 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... -dataSources: +... + dataSources: ... - mapping: - ... - apiVersion: 0.0.6 + mapping: ... ``` @@ -82,10 +81,9 @@ npm install --save @graphprotocol/graph-ts@latest On the older version of AssemblyScript, you could create code like this: ```typescript -function load(): Value | null { ... } +let maybeValue = load()! // breaks in runtime if value is null -let maybeValue = load(); -maybeValue.aMethod(); +maybeValue.aMethod() ``` However on the newer version, because the value is nullable, it requires you to check, like this: @@ -101,9 +99,10 @@ if (maybeValue) { Or force it like this: ```typescript -let maybeValue = load()! // breaks in runtime if value is null +function load(): Value | null { ... 
} -maybeValue.aMethod() +let maybeValue = load(); +maybeValue.aMethod(); ``` If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. @@ -253,6 +252,16 @@ let somethingOrElse = something ? something : 'else' let somethingOrElse +if (something) { + somethingOrElse = something +} else { + somethingOrElse = 'else' +} something : 'else' + +// or + +let somethingOrElse + if (something) { somethingOrElse = something } else { @@ -263,14 +272,8 @@ if (something) { However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: ```typescript -class Container { - data: string | null -} - -let container = new Container() -container.data = 'data' - -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. let somethingOrElse: string = container.data ? container.data : "else"; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` Which outputs this error: @@ -278,7 +281,14 @@ Which outputs this error: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. - let somethingOrElse: string = container.data ? container.data : "else"; + class Container { + data: string | null +} + +let container = new Container() +container.data = 'data' + +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: @@ -293,7 +303,7 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? data : 'else' // compiles just fine :) data : 'else' // compiles just fine :) ``` ### Operator overloading with property access @@ -357,6 +367,7 @@ Also if you have nullable properties in a GraphQL entity, like this: ```graphql type Total @entity { id: ID! + amount: amount: BigInt } ``` @@ -390,8 +401,9 @@ Or you can just change your GraphQL schema to not use a nullable type for this p ```graphql type Total @entity { - id: ID! - amount: BigInt! + id: + ID! + amount: } ``` @@ -449,11 +461,13 @@ Now you no longer can define fields in your types that are Non-Nullable Lists. I ```graphql type Something @entity { - id: ID! + id: +ID! } type MyEntity @entity { - id: ID! + id: + ID! invalidField: [Something]! # no longer valid } ``` @@ -462,11 +476,13 @@ You'll have to add an `!` to the member of the List type, like this: ```graphql type Something @entity { - id: ID! + id: +ID! } type MyEntity @entity { - id: ID! + id: + ID! invalidField: [Something!]! 
# valid } ``` From 5ce20805a74bc77dbfc03f98593771e998edda63 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:50 -0500 Subject: [PATCH 071/432] New translations assemblyscript-migration-guide.mdx (Arabic) --- pages/ar/developer/assemblyscript-migration-guide.mdx | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/pages/ar/developer/assemblyscript-migration-guide.mdx b/pages/ar/developer/assemblyscript-migration-guide.mdx index 2db90a608110..922351f8cb2b 100644 --- a/pages/ar/developer/assemblyscript-migration-guide.mdx +++ b/pages/ar/developer/assemblyscript-migration-guide.mdx @@ -48,11 +48,10 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... -dataSources: +... + dataSources: ... - mapping: - ... - apiVersion: 0.0.6 + mapping: ... ``` From ec84fc3cbbb2432f600a2f443fab066ebf9de9d5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:51 -0500 Subject: [PATCH 072/432] New translations assemblyscript-migration-guide.mdx (Japanese) --- pages/ja/developer/assemblyscript-migration-guide.mdx | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index 2db90a608110..922351f8cb2b 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -48,11 +48,10 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... -dataSources: +... + dataSources: ... - mapping: - ... - apiVersion: 0.0.6 + mapping: ... ``` From 1167026685f6cbeca8fbd0253f07f8165a1e48fd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:52 -0500 Subject: [PATCH 073/432] New translations assemblyscript-migration-guide.mdx (Korean) --- pages/ko/developer/assemblyscript-migration-guide.mdx | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/pages/ko/developer/assemblyscript-migration-guide.mdx b/pages/ko/developer/assemblyscript-migration-guide.mdx index 2db90a608110..922351f8cb2b 100644 --- a/pages/ko/developer/assemblyscript-migration-guide.mdx +++ b/pages/ko/developer/assemblyscript-migration-guide.mdx @@ -48,11 +48,10 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... -dataSources: +... + dataSources: ... - mapping: - ... - apiVersion: 0.0.6 + mapping: ... ``` From af22cc42df48d4cd894595da94e462ce73085dc1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:53 -0500 Subject: [PATCH 074/432] New translations assemblyscript-migration-guide.mdx (Chinese Simplified) --- .../assemblyscript-migration-guide.mdx | 59 +++++++++++++++---- 1 file changed, 49 insertions(+), 10 deletions(-) diff --git a/pages/zh/developer/assemblyscript-migration-guide.mdx b/pages/zh/developer/assemblyscript-migration-guide.mdx index 2db90a608110..592fcdee6d94 100644 --- a/pages/zh/developer/assemblyscript-migration-guide.mdx +++ b/pages/zh/developer/assemblyscript-migration-guide.mdx @@ -2,11 +2,11 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 That will enable subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 > Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. @@ -48,6 +48,11 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... +dataSources: + ... + mapping: + ... + ... dataSources: ... mapping: @@ -101,12 +106,12 @@ if (maybeValue) { Or force it like this: ```typescript -let maybeValue = load()! // breaks in runtime if value is null +let maybeValue = load()! let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. ### Variable Shadowing @@ -135,6 +140,9 @@ By doing the upgrade on your subgraph, sometimes you might get errors like these ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. if (decimals == null) { ~~~~ + in src/mappings/file.ts(41,21) + if (decimals == null) { + ~~~~ in src/mappings/file.ts(41,21) ``` To solve you can simply change the `if` statement to something like this: @@ -253,6 +261,16 @@ let somethingOrElse = something ? something : 'else' let somethingOrElse +if (something) { + somethingOrElse = something +} else { + somethingOrElse = 'else' +} something : 'else' + +// or + +let somethingOrElse + if (something) { somethingOrElse = something } else { @@ -270,7 +288,7 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile container.data : 'else' // doesn't compile ``` Which outputs this error: @@ -280,6 +298,9 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + let somethingOrElse: string = container.data ? 
container.data : "else"; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: @@ -293,7 +314,7 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? data : 'else' // compiles just fine :) data : 'else' // compiles just fine :) ``` ### Operator overloading with property access @@ -302,6 +323,10 @@ If you try to sum (for example) a nullable type (from a property access) with a ```typescript class BigInt extends Uint8Array { + @operator('+') + plus(other: BigInt): BigInt { + // ... + class BigInt extends Uint8Array { @operator('+') plus(other: BigInt): BigInt { // ... @@ -373,7 +398,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: So you either initialize it first: ```typescript let total = Total.load('latest') @@ -392,6 +417,8 @@ Or you can just change your GraphQL schema to not use a nullable type for this p type Total @entity { id: ID! amount: BigInt! +} + amount: BigInt! } ``` @@ -445,13 +472,19 @@ export class Something { This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: If you have a schema like this: ```graphql type Something @entity { id: ID! } +type MyEntity @entity { + id: ID! + invalidField: [Something]! # no longer valid +} +} + type MyEntity @entity { id: ID! invalidField: [Something]! # no longer valid @@ -465,6 +498,12 @@ type Something @entity { id: ID! } +type MyEntity @entity { + id: ID! + invalidField: [Something]! # no longer valid +} +} + type MyEntity @entity { id: ID! invalidField: [Something!]! # valid @@ -478,7 +517,7 @@ This changed because of nullability differences between AssemblyScript versions, - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) - Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. The result of a `**` binary operation is now the common denominator integer if both operands are integers. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) - Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) - Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From f04da92891c3de09f6a1b52438f57c04a5bb0ed3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:55 -0500 Subject: [PATCH 075/432] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 103 ++++++++++-------- 1 file changed, 58 insertions(+), 45 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index 6b235e379634..24436c8da079 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -175,6 +175,7 @@ The `Gravatar` entity below is structured around a Gravatar object and is a good ```graphql type Gravatar @entity { id: ID! + owner: owner: Bytes displayName: String imageUrl: String @@ -188,7 +189,7 @@ The example `GravatarAccepted` and `GravatarDeclined` entities below are based a ```graphql type GravatarAccepted @entity { - id: ID! + id: owner: Bytes displayName: String imageUrl: String @@ -257,12 +258,14 @@ Define a `Transaction` entity type with an optional one-to-one relationship with ```graphql type Transaction @entity { id: ID! + ID! transactionReceipt: TransactionReceipt } type TransactionReceipt @entity { id: ID! - transaction: Transaction + transaction: + Transaction } ``` @@ -272,13 +275,16 @@ Define a `TokenBalance` entity type with a required one-to-many relationship wit ```graphql type Token @entity { - id: ID! + id: +ID! } type TokenBalance @entity { - id: ID! - amount: Int! - token: Token! + id: + ID! + amount: + Int! + token: } ``` @@ -294,14 +300,17 @@ We can make the balances for a token accessible from the token by deriving a `to ```graphql type Token @entity { - id: ID! + id: + ID! tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") } type TokenBalance @entity { - id: ID! - amount: Int! - token: Token! + id: + ID! + amount: + Int! 
+ token: } ``` @@ -315,15 +324,16 @@ Define a reverse lookup from a `User` entity type to an `Organization` entity ty ```graphql type Organization @entity { - id: ID! + id: name: String! members: [User!]! +[User!]! } type User @entity { - id: ID! + id: name: String! - organizations: [Organization!]! @derivedFrom(field: "members") + organizations: [Organization!]! [Organization!]! @derivedFrom(field: "members") } ``` @@ -331,21 +341,23 @@ A more performant way to store this relationship is through a mapping table that ```graphql type Organization @entity { - id: ID! + id: name: String! - members: [UserOrganization]! @derivedFrom(field: "user") + members: [UserOrganization]! [UserOrganization]! @derivedFrom(field: "user") } type User @entity { - id: ID! + id: name: String! - organizations: [UserOrganization!] @derivedFrom(field: "organization") + organizations: [UserOrganization!] [UserOrganization!] @derivedFrom(field: "organization") } type UserOrganization @entity { id: ID! # Set to `${user.id}-${organization.id}` user: User! - organization: Organization! + organization: + Organization! +} } ``` @@ -373,8 +385,9 @@ As per GraphQL spec, comments can be added above schema entity attributes using ```graphql type MyFirstEntity @entity { "unique identifier and primary key of the entity" - id: ID! - address: Bytes! + id: + ID! + address: } ``` @@ -396,15 +409,21 @@ type _Schema_ ) type Band @entity { - id: ID! + id: name: String! - description: String! + ID! + name: String! + description: + String! bio: String wallet: Address - labels: [Label!]! - discography: [Album!]! + labels: + [Label!]! + discography: + [Album!]! members: [Musician!]! } +} ``` The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. @@ -583,33 +602,27 @@ Then, you add _data source templates_ to the manifest. These are identical to re ```yaml dataSources: - kind: ethereum/contract - name: Factory - # ... other source fields for the main contract ... -templates: - - name: Exchange - kind: ethereum/contract - network: mainnet + name: Gravity + network: dev source: - abi: Exchange + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: +Gravity mapping: kind: ethereum/events apiVersion: 0.0.6 language: wasm/assemblyscript - file: ./src/mappings/exchange.ts entities: - - Exchange + - Gravatar + - Transaction abis: - - name: Exchange - file: ./abis/exchange.json - eventHandlers: - - event: TokenPurchase(address,uint256,uint256) - handler: handleTokenPurchase - - event: EthPurchase(address,uint256,uint256) - handler: handleEthPurchase - - event: AddLiquidity(address,uint256,uint256) - handler: handleAddLiquidity - - event: RemoveLiquidity(address,uint256,uint256) - handler: handleRemoveLiquidity + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCallToContract + filter: + kind: call ``` ### Instantiating a Data Source Template @@ -792,7 +805,7 @@ The mapping function will receive an `ethereum.Block` as its only argument. Like ```typescript import { ethereum } from '@graphprotocol/graph-ts' -export function handleBlock(block: ethereum.Block): void { +export function handleBlock(block: ethereum. 
Block): void { let id = block.hash.toHex() let entity = new Block(id) entity.save() From 48c0312011527b67d6e484cad349ed4be6e73888 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:57 -0500 Subject: [PATCH 076/432] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index 6b235e379634..ae11501f7d6e 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -691,7 +691,7 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum. Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. @@ -743,7 +743,7 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum. Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. ## Block Handlers @@ -787,12 +787,12 @@ dataSources: ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum. Block` as its only argument. 
Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' -export function handleBlock(block: ethereum.Block): void { +export function handleBlock(block: ethereum. Block): void { let id = block.hash.toHex() let entity = new Block(id) entity.save() From 7268518d34597b81ea54b34dd0a57602865bf254 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:29:59 -0500 Subject: [PATCH 077/432] New translations graphql-api.mdx (Chinese Simplified) --- pages/zh/developer/graphql-api.mdx | 34 +++++++++++++++--------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/pages/zh/developer/graphql-api.mdx b/pages/zh/developer/graphql-api.mdx index f9cb6214fcd9..d835b27e91b3 100644 --- a/pages/zh/developer/graphql-api.mdx +++ b/pages/zh/developer/graphql-api.mdx @@ -6,7 +6,7 @@ This guide explains the GraphQL Query API that is used for the Graph Protocol. ## Queries -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +In your subgraph schema you define types called `Entities`. In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. #### Examples @@ -36,7 +36,7 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -51,11 +51,11 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. +When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +Further, the `skip` parameter can be used to skip entities and paginate. Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
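As a sketch, that `first`/`skip` combination looks like this (assuming a `Token` entity with `id` and `owner` fields, as used in the other query examples of this API reference):

```graphql
{
  tokens(first: 100, skip: 100) {
    id
    owner
  }
}
```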
-Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. #### Example @@ -87,7 +87,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: For example, a client would retrieve a large number of tokens using this query: ```graphql { @@ -100,11 +100,11 @@ If a client needs to retrieve a large number of entities, it is much more perfor } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. You can filter on mulltiple values within the `where` parameter. #### Example @@ -154,13 +154,13 @@ _not_starts_with _not_ends_with ``` -Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. +Please note that some suffixes are only supported for specific types. Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. ### Time-travel queries -You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. 
The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. Once a block can be considered final, the result of the query will not change. Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. @@ -198,9 +198,9 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. Several special fulltext operators are available to be used in this `text` search field. Fulltext search operators: @@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +The `follow by` operator specifies a words a specific distance apart in the fulltext documents. 
The following query will return all blogs with variations of "decentralize" followed by "philosophy" The following query will return all blogs with variations of "decentralize" followed by "philosophy" ```graphql { @@ -239,7 +239,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +Combine fulltext operators to make more complex filters. Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". ```graphql { @@ -256,7 +256,7 @@ Combine fulltext operators to make more complex filters. With a pretext search o The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. > **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -264,4 +264,4 @@ GraphQL schemas generally define root types for `queries`, `subscriptions` and ` All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. 
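As a brief illustration of that requirement, a valid entity type only needs the `@entity` directive and an `ID` field (the `Token` and `owner` names here are placeholders, not part of any particular schema):

```graphql
type Token @entity {
  id: ID!
  owner: Bytes
}
```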
From c6a9bd08f18b70c67b58c38dea9758c9891fb417 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:30:02 -0500 Subject: [PATCH 078/432] New translations matchstick.mdx (Spanish) --- pages/es/developer/matchstick.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/es/developer/matchstick.mdx b/pages/es/developer/matchstick.mdx index 3cf1ec761bb9..2f4e5e972ebe 100644 --- a/pages/es/developer/matchstick.mdx +++ b/pages/es/developer/matchstick.mdx @@ -44,13 +44,13 @@ export function createNewGravatarEvent( mockEvent.parameters ) newGravatarEvent.parameters = new Array() - let idParam = new ethereum.EventParam('id', ethereum.Value.fromI32(id)) - let addressParam = new ethereum.EventParam( + let idParam = new ethereum. EventParam('id', ethereum. Value.fromI32(id)) + let addressParam = new ethereum. EventParam( 'ownderAddress', - ethereum.Value.fromAddress(Address.fromString(ownerAddress)) + ethereum. Value.fromAddress(Address.fromString(ownerAddress)) ) - let displayNameParam = new ethereum.EventParam('displayName', ethereum.Value.fromString(displayName)) - let imageUrlParam = new ethereum.EventParam('imageUrl', ethereum.Value.fromString(imageUrl)) + let displayNameParam = new ethereum. EventParam('displayName', ethereum. Value.fromString(displayName)) + let imageUrlParam = new ethereum. EventParam('imageUrl', ethereum. Value.fromString(imageUrl)) newGravatarEvent.parameters.push(idParam) newGravatarEvent.parameters.push(addressParam) @@ -100,10 +100,10 @@ That's a lot to unpack! First off, an important thing to notice is that we're im - We're setting up our initial state and adding one custom Gravatar entity; - We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; - We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; -- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called; +- We assert the state of the store. How does that work? How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called; - And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. -There we go - we've created our first test! 👏 +There we go - we've created our first test! 👏 👏 ❗ **IMPORTANT:** _In order for the tests to work, we need to export the `runTests()` function in our mappings file. 
It won't be used there, but the export statement has to be there so that it can get picked up by Rust later when running the tests._ @@ -199,7 +199,7 @@ createMockedFunction(contractAddress, 'gravatarToOwner', 'gravatarToOwner(uint25 let gravity = Gravity.bind(contractAddress) let result = gravity.gravatarToOwner(bigIntParam) -assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result)) +assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum. Value.fromAddress(result)) ``` As demonstrated, in order to mock a contract call and hardcore a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value. @@ -231,7 +231,7 @@ Running the assert.fieldEquals() function will check for equality of the given f ### Interacting with Event metadata -Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function. The following example shows how you can read/write to those fields on the Event object: +Users can use default transaction metadata, which could be returned as an ethereum. Event by using the `newMockEvent()` function. The following example shows how you can read/write to those fields on the Event object: ```typescript // Read @@ -245,7 +245,7 @@ newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) ### Asserting variable equality ```typescript -assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); +assert.equals(ethereum.Value.fromString("hello"); ethereum. Value.fromString("hello")); ``` ### Asserting that an Entity is **not** in the store From 24db59c149d6f7787b200e8a42ae92678fa8b464 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:30:04 -0500 Subject: [PATCH 079/432] New translations deploy-subgraph-hosted.mdx (Korean) --- pages/ko/hosted-service/deploy-subgraph-hosted.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/hosted-service/deploy-subgraph-hosted.mdx b/pages/ko/hosted-service/deploy-subgraph-hosted.mdx index bdc532e205e4..162beff05797 100644 --- a/pages/ko/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/ko/hosted-service/deploy-subgraph-hosted.mdx @@ -6,7 +6,7 @@ If you have not checked out already, check out how to write the files that make ## Create a Hosted Service account -Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. +Before using the Hosted Service, create an account in our Hosted Service. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. 
## Store the Access Token From 797f12eed45307d454961f95290db800302f1bda Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:30:05 -0500 Subject: [PATCH 080/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 44 ++++++++++++------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index bdc532e205e4..7ebbcd1eed72 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -2,25 +2,25 @@ title: Deploy a Subgraph to the Hosted Service --- -If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. +If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. ## Create a Hosted Service account -Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. +Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. ## Store the Access Token -After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. +After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. ## Create a Subgraph on the Hosted Service -Before deploying the subgraph, you need to create it in The Graph Explorer. 
Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: +Before deploying the subgraph, you need to create it in The Graph Explorer. Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: **Image** - Select an image to be used as a preview image and thumbnail for the subgraph. -**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ +**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ _This field cannot be changed later._ -**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ +**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ **Subtitle** - Text that will appear in subgraph cards. @@ -30,7 +30,7 @@ Before deploying the subgraph, you need to create it in The Graph Explorer. Go t **Hide** - Switching this on hides the subgraph in the Graph Explorer. -After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). +After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). ## Deploy a Subgraph on the Hosted Service @@ -38,25 +38,26 @@ Deploying your subgraph will upload the subgraph files that you've built with `y You deploy the subgraph by running `yarn deploy` -After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. +After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. 
The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. ## Redeploying a Subgraph -When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. +When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. -If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. +If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. This ensures that you have a subgraph to work with while the new version is syncing. ### Deploying the subgraph to multiple Ethereum networks -In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). 

-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network:

```json
{
  "network": "mainnet",
  "address": "0x123..."
}
```

and

```json
{
  "network": "ropsten",
  "address": "0xabc..."
}
```

Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`:

```yaml
# ...
dataSources:
  - kind: ethereum/contract
    name: Gravity

@@ -90,6 +93,10 @@ In order generate a manifest to either network, you could add two additional com

```json
{
  ...
  "scripts": {
    ...
    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",

@@ -99,6 +106,9 @@ In order generate a manifest to either network, you could add two additional com

    ...
    "mustache": "^3.1.0"
  }
}
```

@@ -118,9 +128,9 @@ A working example of this can be found [here](https://github.com/graphprotocol/e

## Checking subgraph health

-If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
+If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.

-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:

```graphql
{
  }
}
```

-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error.

## Subgraph archive policy

-The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via GraphQL.
+The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via GraphQL.

To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. **A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.**

-Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service GraphQL playground. Developers can always redeploy an archived subgraph if it is required again.
+Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service GraphQL playground. Developers can always redeploy an archived subgraph if it is required again.
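As a quick way to run the health check described under "Checking subgraph health" above from the command line, the index-node endpoint can be hit directly with `curl`. This is only a sketch: `org/subgraph` is a placeholder for your own `account-name/subgraph-name`, and the field names follow the index-node schema linked above.

```bash
# Ask the index node whether the current version of a subgraph is synced and healthy.
curl -s https://api.thegraph.com/index-node/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatusForCurrentVersion(subgraphName: \"org/subgraph\") { synced health chains { chainHeadBlock { number } latestBlock { number } } } }"}'
```

Comparing `latestBlock` to `chainHeadBlock` in the response shows how far behind the head of the chain the subgraph is, matching the interpretation given above.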
From ed386aaf66328d066b1c14f1ad2c98bdaf56075c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:30:07 -0500 Subject: [PATCH 081/432] New translations migrating-subgraph.mdx (Spanish) --- pages/es/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/hosted-service/migrating-subgraph.mdx b/pages/es/hosted-service/migrating-subgraph.mdx index eda54d1931ed..44d02384cd0c 100644 --- a/pages/es/hosted-service/migrating-subgraph.mdx +++ b/pages/es/hosted-service/migrating-subgraph.mdx @@ -133,7 +133,7 @@ Remember that it's a dynamic and growing market, but how you interact with it is ## Additional Resources -If you're still confused, fear not! Check out the following resources or watch our video guide on migrating subgraphs to the decentralized network below: +If you're still confused, fear not! If you're still confused, fear not! Check out the following resources or watch our video guide on migrating subgraphs to the decentralized network below:
-Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements.
+Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements.

-You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely!
+You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely!

-Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries.
+Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries.
-**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist!
+**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist!

## Versioning your Subgraph with the CLI

-Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version.
+Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version.

-Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
+Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
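For reference, a minimal sketch of the CLI side of this versioning flow. It assumes the `graph-cli` commands of this era; `<DEPLOY_KEY>` and `<SUBGRAPH_SLUG>` are placeholders copied from the Studio, and if the version-label flag is omitted the CLI will prompt for one interactively.

```bash
# Authenticate the CLI against the Studio (once per machine)
graph auth --studio <DEPLOY_KEY>

# Rebuild and deploy the updated subgraph as a new, still-private version
graph codegen && graph build
graph deploy --studio <SUBGRAPH_SLUG> --version-label v0.0.2
```

Publishing that new deployment to The Graph Explorer remains a separate on-chain step taken from the UI, as described above.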
Please note that there are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, developers must also fund a part of the curation tax on auto-migrating signal. You cannot publish a new version of your subgraph if curators have not signaled on it. For more information on the risks of curation, please read more [here](/curating).

### Automatic Archiving of Subgraph Versions

-Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived.
+Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived.

![Subgraph Studio - Unarchive](/img/Unarchive.png)

## Managing your API Keys

-Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application.
+Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application.

The Studio will list out existing API keys, which will give you the ability to manage or delete them.

@@ -110,13 +110,13 @@ The Studio will list out existing API keys, which will give you the ability to m

- View the current usage of the API key with stats:
  - Number of queries
  - Amount of GRT spent
-2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can:
+2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can:
  - View and manage the domain names authorized to use your API key
  - Assign subgraphs that can be queried with your API key

## How to Manage your Subgraph

-API keys aside, you’ll have many tools at your disposal to manage your subgraphs. You can organize your subgraphs by their **status** and **category**.
You can organize your subgraphs by their **status** and **category**. You can organize your subgraphs by their **status** and **category**. - The **Status** tag allows you to pick between a variety of tags including ``, ``, ``, ``, etc. -- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. +- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. Options include ``, ``, ``, etc. From e041224c32a9be1d3e9d5e6e2d941bec6de72df5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 01:31:09 -0500 Subject: [PATCH 108/432] New translations indexing.mdx (Chinese Simplified) --- pages/zh/indexing.mdx | 67 +++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 64 insertions(+), 3 deletions(-) diff --git a/pages/zh/indexing.mdx b/pages/zh/indexing.mdx index f4c2f6f49ef4..62f122923dda 100644 --- a/pages/zh/indexing.mdx +++ b/pages/zh/indexing.mdx @@ -129,13 +129,13 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute ## 基础设施 -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. - **PostgreSQL 数据库** - Graph 节点的主要存储,这是存储子图数据的地方。 索引人服务和代理也使用数据库来存储状态通道数据、成本模型和索引规则。 - **Ethereum endpoint** -公开 Ethereum JSON-RPC API 的端点。 这可能采取单个 Ethereum 客户端的形式,也可能是一个更复杂的设置,在多个客户端之间进行负载平衡。 需要注意的是,某些子图将需要特定的 Ethereum 客户端功能,如存档模式和跟踪 API。 -- ** IPFS 节点(版本小于 5)** - 子图部署元数据存储在 IPFS 网络上。 The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- ** IPFS 节点(版本小于 5)** - 子图部署元数据存储在 IPFS 网络上。 The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. 
- **索引人服务** -处理与网络的所有必要的外部通信。 共享成本模型和索引状态,将来自网关的查询请求传递给一个 Graph 节点,并通过状态通道与网关管理查询支付。 @@ -373,7 +373,7 @@ docker-compose up #### 开始 -索引人代理和索引人服务应该与你的 Graph 节点基础架构共同定位。 有很多方法可以为你的索引人组件设置虚拟执行环境,这里我们将解释如何使用 NPM 包或源码在裸机上运行它们,或者通过谷歌云 Kubernetes 引擎上的 kubernetes 和 docker 运行。 If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +索引人代理和索引人服务应该与你的 Graph 节点基础架构共同定位。 有很多方法可以为你的索引人组件设置虚拟执行环境,这里我们将解释如何使用 NPM 包或源码在裸机上运行它们,或者通过谷歌云 Kubernetes 引擎上的 kubernetes 和 docker 运行。 If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! #### 来自 NPM 包 @@ -409,6 +409,15 @@ graph indexer ... # Indexer agent graph-indexer-agent start ... +# Indexer CLI +#Forward the port of your agent pod if using Kubernetes +kubectl port-forward pod/POD_ID 18000:8000 +graph indexer connect http://localhost:18000/ +graph indexer ... + +# Indexer agent +graph-indexer-agent start ... + # Indexer CLI #Forward the port of your agent pod if using Kubernetes kubectl port-forward pod/POD_ID 18000:8000 @@ -470,6 +479,55 @@ cd packages/indexer-cli cd packages/indexer-agent ./bin/graph-indexer-service start ... +# Indexer CLI +cd packages/indexer-cli +./bin/graph-indexer-cli indexer connect http://localhost:18000/ +./bin/graph-indexer-cli indexer ... + +# Indexer agent +cd packages/indexer-agent +./bin/graph-indexer-service start ... + +# From Repo root directory +yarn + +# Indexer Service +cd packages/indexer-service +./bin/graph-indexer-service start ... + +# Indexer agent +cd packages/indexer-agent +./bin/graph-indexer-service start ... + +# Indexer CLI +cd packages/indexer-cli +./bin/graph-indexer-cli indexer connect http://localhost:18000/ +./bin/graph-indexer-cli indexer ... + +# Indexer agent +cd packages/indexer-agent +./bin/graph-indexer-service start ... + +# From Repo root directory +yarn + +# Indexer Service +cd packages/indexer-service +./bin/graph-indexer-service start ... + +# Indexer agent +cd packages/indexer-agent +./bin/graph-indexer-service start ... + +# Indexer CLI +cd packages/indexer-cli +./bin/graph-indexer-cli indexer connect http://localhost:18000/ +./bin/graph-indexer-cli indexer ... + +# Indexer agent +cd packages/indexer-agent +./bin/graph-indexer-service start ... + # Indexer CLI cd packages/indexer-cli ./bin/graph-indexer-cli indexer connect http://localhost:18000/ @@ -508,6 +566,7 @@ docker run -p 18000:8000 -it indexer-agent:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... +docker run -p 18000:8000 -it indexer-agent:latest ... ``` 请参阅 [在 Google Cloud 上使用 Terraform 设置服务器基础架构](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) 一节。 @@ -655,6 +714,8 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE # This default will match any GraphQL expression. 
# It uses a Global substituted into the expression to calculate cost default => 0.1 * $SYSTEM_LOAD; +# It uses a Global substituted into the expression to calculate cost +default => 0.1 * $SYSTEM_LOAD; ``` 成本模型示例: From facccd2ca36a392340cce843b83cd55c9b7d8f9e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 12 Jan 2022 10:03:09 -0500 Subject: [PATCH 109/432] New translations near.mdx (Chinese Simplified) --- pages/zh/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx index cc42c2e2f4f9..3bae5584af00 100644 --- a/pages/zh/supported-networks/near.mdx +++ b/pages/zh/supported-networks/near.mdx @@ -238,7 +238,7 @@ No, a subgraph can only support data sources from one chain / network. ### Can subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. ### Will receipt handlers trigger for accounts and their sub accounts? From 89734cad847e0a0d8e73449e6086a5c8f7557cee Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 00:48:46 -0500 Subject: [PATCH 110/432] New translations developer-faq.mdx (Chinese Simplified) --- pages/zh/developer/developer-faq.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/developer/developer-faq.mdx b/pages/zh/developer/developer-faq.mdx index 58380c271633..7c7a76b40daa 100644 --- a/pages/zh/developer/developer-faq.mdx +++ b/pages/zh/developer/developer-faq.mdx @@ -10,7 +10,7 @@ It is not possible to delete subgraphs once they are created. No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. -### 3. 3. Can I change the GitHub account associated with my subgraph? +### 3. Can I change the GitHub account associated with my subgraph? No. No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. Make sure to think of this carefully before you create your subgraph. From ef11808f321cd9aea0ad30c0c46a6a3db501a767 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 00:48:47 -0500 Subject: [PATCH 111/432] New translations deploy-subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/deploy-subgraph-studio.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/zh/studio/deploy-subgraph-studio.mdx b/pages/zh/studio/deploy-subgraph-studio.mdx index c2d4321ea285..d387d10aadad 100644 --- a/pages/zh/studio/deploy-subgraph-studio.mdx +++ b/pages/zh/studio/deploy-subgraph-studio.mdx @@ -2,7 +2,7 @@ title: Deploy a Subgraph to the Subgraph Studio --- -Deploying a Subgraph to the Subgraph Studio is quite simple. 
This will take you through the steps to: This will take you through the steps to: +Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: - Install The Graph CLI (with both yarn and npm) - Create your Subgraph in the Subgraph Studio @@ -11,7 +11,7 @@ Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you ## Installing Graph CLI -We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. Here are the commands to install graph-cli. This can be done using npm or yarn. +We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. **Install with yarn:** @@ -27,7 +27,7 @@ npm install -g @graphprotocol/graph-cli ## Create your Subgraph in Subgraph Studio -Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. +Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. ## 2. Initialize your Subgraph @@ -41,7 +41,7 @@ The `` value can be found on your subgraph details page in Subgra ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. ## Graph Auth From 801d681b971f99c2c8a498a2345bbac7ac23cd63 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 00:48:48 -0500 Subject: [PATCH 112/432] New translations multisig.mdx (Chinese Simplified) --- pages/zh/studio/multisig.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/studio/multisig.mdx b/pages/zh/studio/multisig.mdx index 74e54a901d98..164835bdb8a4 100644 --- a/pages/zh/studio/multisig.mdx +++ b/pages/zh/studio/multisig.mdx @@ -2,11 +2,11 @@ title: Using a Multisig Wallet --- -Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. 
Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. +Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. ### Create a Subgraph -Similary to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. +Similary to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. For initializing a starter subgraph, you can follow the commands shown in the UI, or simply run @@ -14,7 +14,7 @@ For initializing a starter subgraph, you can follow the commands shown in the UI graph init --studio ``` -`SUBGRAPH_SLUG` is the name of your subgraph that you can copy from the UI, or from the URL in the browser. This command should create a folder in your file system with all the necessary files to start developing a subgraph. This command should create a folder in your file system with all the necessary files to start developing a subgraph. +`SUBGRAPH_SLUG` is the name of your subgraph that you can copy from the UI, or from the URL in the browser. This command should create a folder in your file system with all the necessary files to start developing a subgraph. ### Deploy a Subgraph @@ -32,9 +32,9 @@ You can either publish a new subgraph to the decentralized network or publish a #### Publish a New Subgraph -There are a couple of ways to publish a subgraph using multisig wallets. There are a couple of ways to publish a subgraph using multisig wallets. Here we'll describe invoking the [`publishNewSubgraph`](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol#L231) function in the [GNS contract](https://etherscan.io/address/0xaDcA0dd4729c8BA3aCf3E99F3A9f471EF37b6825) using Etherscan. +There are a couple of ways to publish a subgraph using multisig wallets. Here we'll describe invoking the [`publishNewSubgraph`](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol#L231) function in the [GNS contract](https://etherscan.io/address/0xaDcA0dd4729c8BA3aCf3E99F3A9f471EF37b6825) using Etherscan. -Before we use that function, we need to generate input arguments for it. Before we use that function, we need to generate input arguments for it. Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide the following: +Before we use that function, we need to generate input arguments for it. 
Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide the following: - Ethereum address of your multisig wallet - Subgraph that you want to publish @@ -46,7 +46,7 @@ There should be 4 arguments: - `graphAccount`: which is your multisig account address - `subgraphDeploymentID`: the hex hash of the deployment ID for that subgraph -- `versionMetadata`: version metadata (label and description) that gets uploaded to IPFS. The hex hash value for that JSON file will be provided. The hex hash value for that JSON file will be provided. +- `versionMetadata`: version metadata (label and description) that gets uploaded to IPFS. The hex hash value for that JSON file will be provided. - `subgraphMetadata`: simlar to version metadata, subgraph metadata (name, image, description, website and source code url) gets uploaded to IPFS, and we provide the hex hash value for that JSON file With those 4 arguments, you should be able to: @@ -57,7 +57,7 @@ With those 4 arguments, you should be able to: #### Publish a New Version -To publish a new version of an existing subgraph we first need to generate input arguments for it. Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide: Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide: +To publish a new version of an existing subgraph we first need to generate input arguments for it. Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide: - Ethereum address of your multisig wallet - Subgraph that you want to publish @@ -69,11 +69,11 @@ After clicking on "Get Arguments" we'll generate all the contract arguments for On the right side of the UI under the `Publish New Version` title, there should be 4 arguments: - `graphAccount`: which is your Multisig account address -- `subgraphNumber`: is the number of your already published subgraph. `subgraphNumber`: is the number of your already published subgraph. It is a part of the subgraph id for a published subgraph queried through The Graph Network subgraph. +- `subgraphNumber`: is the number of your already published subgraph. It is a part of the subgraph id for a published subgraph queried through The Graph Network subgraph. - `subgraphDeploymentID`: which is the hex hash of the deployment ID for that subgraph - `versionMetadata`: version metadata (label and description) gets uploaded to IPFS, and we provide the hex hash value for that JSON file -Now that we generated all the arguments you are ready to proceed and call the `publishNewVersion` method. In order to do so, you should: In order to do so, you should: +Now that we generated all the arguments you are ready to proceed and call the `publishNewVersion` method. 
In order to do so, you should: - Visit [the GraphProxy](https://etherscan.io/address/0xaDcA0dd4729c8BA3aCf3E99F3A9f471EF37b6825#writeProxyContract) contract on Etherscan - Connect to Etherscan using WalletConnect via the WalletConnect Safe app of your Multisig From 890e2407a86946838f8f133e35d9c560947cc384 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 00:48:49 -0500 Subject: [PATCH 113/432] New translations studio-faq.mdx (Chinese Simplified) --- pages/zh/studio/studio-faq.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/zh/studio/studio-faq.mdx b/pages/zh/studio/studio-faq.mdx index 96028fbe7f3e..4db4d7ccddaa 100644 --- a/pages/zh/studio/studio-faq.mdx +++ b/pages/zh/studio/studio-faq.mdx @@ -2,20 +2,20 @@ title: Subgraph Studio FAQs --- -### 1. 1. How do I create an API Key? +### 1. How do I create an API Key? In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. -### 2. 2. Can I create multiple API Keys? +### 2. Can I create multiple API Keys? -A: Yes! A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). Check out the link [here](https://thegraph.com/studio/apikeys/). +A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). -### 3. 3. How do I restrict a domain for an API Key? +### 3. How do I restrict a domain for an API Key? After creating an API Key, in the Security section you can define the domains that can query a specific API Key. -### 4. 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. +You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. 
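As a sketch of what a query against a published subgraph looks like once an API key is in hand: the URL format below is the query URL shown in the subgraph's query pane, and `<API_KEY>` and `<SUBGRAPH_ID>` are placeholders.

```bash
# Query a published subgraph through the gateway using a Studio API key.
curl -s "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>" \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ _meta { block { number } } }"}'
```

The `_meta` query here is only a connectivity check; any query against the subgraph's own schema goes through, and is billed, the same way.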
From cecf0c8428ea838d395f99b0855b3bbb9b66a700 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 00:48:50 -0500 Subject: [PATCH 114/432] New translations subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/subgraph-studio.mdx | 38 ++++++++++++++--------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/pages/zh/studio/subgraph-studio.mdx b/pages/zh/studio/subgraph-studio.mdx index d718adb99356..9af3926db3df 100644 --- a/pages/zh/studio/subgraph-studio.mdx +++ b/pages/zh/studio/subgraph-studio.mdx @@ -15,9 +15,9 @@ What you can do in the Subgraph Studio: - Integrate it in staging using the query URL - Create and manage your API keys for specific subgraphs -Here in the Subgraph Studio, you have full control over your subgraphs. Here in the Subgraph Studio, you have full control over your subgraphs. Not only can you test your subgraphs before you publish them, but you can also restrict your API keys to specific domains and only allow certain indexers to query from their API keys. +Here in the Subgraph Studio, you have full control over your subgraphs. Not only can you test your subgraphs before you publish them, but you can also restrict your API keys to specific domains and only allow certain indexers to query from their API keys. -Querying subgraphs generates query fees, used to reward [indexers](/indexing) on the Graph network. If you’re a dapp developer or subgraph developer, the Studio will empower you to build better subgraphs to power your or your community’s queries. The Studio is comprised of 5 main parts: If you’re a dapp developer or subgraph developer, the Studio will empower you to build better subgraphs to power your or your community’s queries. The Studio is comprised of 5 main parts: +Querying subgraphs generates query fees, used to reward [indexers](/indexing) on the Graph network. If you’re a dapp developer or subgraph developer, the Studio will empower you to build better subgraphs to power your or your community’s queries. The Studio is comprised of 5 main parts: - Your user account controls - A list of subgraphs that you’ve created @@ -28,11 +28,11 @@ Querying subgraphs generates query fees, used to reward [indexers](/indexing) on ## How to Create Your Account 1. Sign in with your wallet - you can do this via MetaMask or WalletConnect -1. Once you sign in, you will see your unique deploy key in your account home page. This will allow you to either publish your subgraphs or manage your API keys + billing. You will have a unique deploy key that can be re-generated if you think it has been compromised. This will allow you to either publish your subgraphs or manage your API keys + billing. You will have a unique deploy key that can be re-generated if you think it has been compromised. +1. Once you sign in, you will see your unique deploy key in your account home page. This will allow you to either publish your subgraphs or manage your API keys + billing. You will have a unique deploy key that can be re-generated if you think it has been compromised. ## How to Create your Subgraph in Subgraph Studio -The best part! The best part! When you first create a subgraph, you’ll be directed to fill out: +The best part! When you first create a subgraph, you’ll be directed to fill out: - Your Subgraph Name - Image @@ -42,7 +42,7 @@ The best part! The best part! 
When you first create a subgraph, you’ll be dire ## Subgraph Compatibility with The Graph Network -The Graph Network is not yet able to support all of the data-sources & features available on the Hosted Service. In order to be supported by indexers on the network, subgraphs must: In order to be supported by indexers on the network, subgraphs must: +The Graph Network is not yet able to support all of the data-sources & features available on the Hosted Service. In order to be supported by indexers on the network, subgraphs must: - Index mainnet Ethereum - Must not use any of the following features: @@ -56,15 +56,15 @@ More features & networks will be added to The Graph Network incrementally. ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) -After you have created your subgraph, you will be able to deploy it using the [CLI](https://github.com/graphprotocol/graph-cli), or command line interface. After you have created your subgraph, you will be able to deploy it using the [CLI](https://github.com/graphprotocol/graph-cli), or command line interface. Deploying a subgraph with the CLI will push the subgraph to the Studio where you’ll be able to test subgraphs using the playground. This will eventually allow you to publish to the Graph Network. For more information on CLI setup, [check this out](/developer/define-subgraph-hosted#install-the-graph-cli) (pst, make sure you have your deploy key on hand). Remember, deploying is **not the same as** publishing. When you deploy a subgraph, you just push it to the Studio where you’re able to test it. Versus, when you publish a subgraph, you are publishing it on-chain. This will eventually allow you to publish to the Graph Network. For more information on CLI setup, [check this out](/developer/define-subgraph-hosted#install-the-graph-cli) (pst, make sure you have your deploy key on hand). Remember, deploying is **not the same as** publishing. When you deploy a subgraph, you just push it to the Studio where you’re able to test it. Versus, when you publish a subgraph, you are publishing it on-chain. +After you have created your subgraph, you will be able to deploy it using the [CLI](https://github.com/graphprotocol/graph-cli), or command line interface. Deploying a subgraph with the CLI will push the subgraph to the Studio where you’ll be able to test subgraphs using the playground. This will eventually allow you to publish to the Graph Network. For more information on CLI setup, [check this out](/developer/define-subgraph-hosted#install-the-graph-cli) (pst, make sure you have your deploy key on hand). Remember, deploying is **not the same as** publishing. When you deploy a subgraph, you just push it to the Studio where you’re able to test it. Versus, when you publish a subgraph, you are publishing it on-chain. ## Testing your Subgraph in Subgraph Studio -If you’d like to test your subgraph before publishing it to the network, you can do this in the Subgraph **Playground** or look at your logs. The Subgraph logs will tell you **where** your subgraph fails in the case that it does. The Subgraph logs will tell you **where** your subgraph fails in the case that it does. +If you’d like to test your subgraph before publishing it to the network, you can do this in the Subgraph **Playground** or look at your logs. The Subgraph logs will tell you **where** your subgraph fails in the case that it does. ## Publish your Subgraph in Subgraph Studio -You’ve made it this far - congrats! You’ve made it this far - congrats! 
Publishing your subgraph means that an IPFS hash was generated when you deployed the subgraph within the CLI and is stored in the network’s Ethereum smart contracts. In order to publish your subgraph successfully, you’ll need to go through the following steps outlined in this [blog](https://thegraph.com/blog/building-with-subgraph-studio). Check out the video overview below as well: In order to publish your subgraph successfully, you’ll need to go through the following steps outlined in this [blog](https://thegraph.com/blog/building-with-subgraph-studio). Check out the video overview below as well: +You’ve made it this far - congrats! Publishing your subgraph means that an IPFS hash was generated when you deployed the subgraph within the CLI and is stored in the network’s Ethereum smart contracts. In order to publish your subgraph successfully, you’ll need to go through the following steps outlined in this [blog](https://thegraph.com/blog/building-with-subgraph-studio). Check out the video overview below as well:
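Before the publishing step happens in the UI, the deployment itself is done from the CLI. A minimal sketch of that flow, using the commands the Studio shows after creating a subgraph; `<SUBGRAPH_SLUG>` is a placeholder, and authentication with your deploy key (`graph auth --studio`) is assumed to have been done already.

```bash
# Scaffold, build and deploy to the Studio; publishing to the network is a later step in the UI.
graph init --studio <SUBGRAPH_SLUG>
cd <SUBGRAPH_SLUG>
graph codegen && graph build
graph deploy --studio <SUBGRAPH_SLUG>
```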
-Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. +Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. -You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely! This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely! +You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely! -Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries. +Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries. 
-**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist! This is usually because the subgraph has bugs - the logs will tell you where those issues exist! +**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist! ## Versioning your Subgraph with the CLI -Developers might want to update their subgraph, for a variety of reasons. Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version. +Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version. -Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. +Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. 
Please note that there are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, developers must also fund a part of the curation tax on auto-migrating signal. You cannot publish a new version of your subgraph if curators have not signaled on it. For more information on the risks of curation, please read more [here](/curating). ### Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived. +Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) ## Managing your API Keys -Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application. +Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application. The Studio will list out existing API keys, which will give you the ability to manage or delete them. @@ -110,13 +110,13 @@ The Studio will list out existing API keys, which will give you the ability to m - View the current usage of the API key with stats: - Number of queries - Amount of GRT spent -2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can: In this section, you can: +2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can: - View and manage the domain names authorized to use your API key - Assign subgraphs that can be queried with your API key ## How to Manage your Subgraph -API keys aside, you’ll have many tools at your disposal to manage your subgraphs. You can organize your subgraphs by their **status** and **category**. You can organize your subgraphs by their **status** and **category**. 
+API keys aside, you’ll have many tools at your disposal to manage your subgraphs. You can organize your subgraphs by their **status** and **category**. - The **Status** tag allows you to pick between a variety of tags including ``, ``, ``, ``, etc. -- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. Options include ``, ``, ``, etc. +- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. From e5a562c5f47d683c56b823588d6a41d97ff5be8d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 00:48:51 -0500 Subject: [PATCH 115/432] New translations near.mdx (Chinese Simplified) --- pages/zh/supported-networks/near.mdx | 48 ++++++++++++++-------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx index 3bae5584af00..bff2f82364d9 100644 --- a/pages/zh/supported-networks/near.mdx +++ b/pages/zh/supported-networks/near.mdx @@ -8,20 +8,20 @@ This guide is an introduction to building subgraphs indexing smart contracts on ## What is NEAR? -[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. +[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. ## What are NEAR subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process on-chain events. Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account [From the NEAR documentation](https://docs.near.org/docs/concepts/transaction#receipt): -> A Receipt is the only actionable object in the system. A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> A Receipt is the only actionable object in the system. 
When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. ## Building a NEAR Subgraph @@ -35,11 +35,11 @@ NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `grap There are three aspects of subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. NEAR is a new `kind` of data source. +**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. During subgraph development there are two key commands: @@ -50,7 +50,7 @@ $ graph build # generates Web Assembly from the AssemblyScript files, and prepar ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:: See below for an example subgraph manifest for a NEAR subgraph:: +The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:: ```yaml specVersion: 0.0.2 @@ -73,17 +73,17 @@ dataSources: ``` - NEAR subgraphs introduce a new `kind` of data source (`near`) -- The `network` should correspond to a network on the hosting Graph Node. The `network` should correspond to a network on the hosting Graph Node. On the Hosted Service, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account, or a sub account. This can be an account, or a sub account. +- The `network` should correspond to a network on the hosting Graph Node. 
On the Hosted Service, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account, or a sub account. NEAR data sources support two types of handlers: -- `blockHandlers`: run on every new NEAR block. No `source.account` is required. No `source.account` is required. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). ### Schema Definition -Schema definition describes the structure of the resulting subgraph database, and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). +Schema definition describes the structure of the resulting subgraph database, and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). ### AssemblyScript Mappings @@ -158,11 +158,11 @@ These types are passed to block & receipt handlers: Otherwise the rest of the [AssemblyScript API](/developer/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). 
The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet in beta, with the following network names: @@ -171,7 +171,7 @@ The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet More information on creating and deploying subgraphs on the Hosted Service can be found [here](/hosted-service/deploy-subgraph-hosted). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". +As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: @@ -194,7 +194,7 @@ graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: You can check its progress by querying the subgraph itself: +Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: ``` { @@ -216,7 +216,7 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. +The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. ## Example Subgraphs @@ -230,7 +230,7 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! ### Can a subgraph index both NEAR and EVM chains? @@ -242,23 +242,23 @@ Currently, only Block and Receipt triggers are supported. We are investigating t ### Will receipt handlers trigger for accounts and their sub accounts? -Receipt handlers will only be triggered for the exact-match of the named account. More flexibility may be added in future. More flexibility may be added in future. +Receipt handlers will only be triggered for the exact-match of the named account. More flexibility may be added in future. ### Can NEAR subgraphs make view calls to NEAR accounts during mappings? -This is not supported. This is not supported. 
We are evaluating whether this functionality is required for indexing. +This is not supported. We are evaluating whether this functionality is required for indexing. ### Can I use data source templates in my NEAR subgraph? -This is not currently supported. This is not supported. We are evaluating whether this functionality is required for indexing. +This is not currently supported. We are evaluating whether this functionality is required for indexing. ### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? -Pending functionality is not yet supported for NEAR subgraphs. Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. ### My question hasn't been answered, where can I get more help building NEAR subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. +If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. ## References From 24ef4f39f40aa9b61ad1c143d1323b36ab908bed Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:12 -0500 Subject: [PATCH 116/432] New translations define-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/define-subgraph-hosted.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/developer/define-subgraph-hosted.mdx b/pages/zh/developer/define-subgraph-hosted.mdx index 6006b117aa62..92bf5bd8cd2f 100644 --- a/pages/zh/developer/define-subgraph-hosted.mdx +++ b/pages/zh/developer/define-subgraph-hosted.mdx @@ -2,7 +2,7 @@ title: Define a Subgraph --- -A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. Once deployed, it will form a part of a global graph of blockchain data. +A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. 
![Define a Subgraph](/img/define-subgraph.png) From 165a2600a8dd45bd16c4efd983803c3b88d73cfa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:16 -0500 Subject: [PATCH 117/432] New translations deprecating-a-subgraph.mdx (Chinese Simplified) --- pages/zh/developer/deprecating-a-subgraph.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/zh/developer/deprecating-a-subgraph.mdx b/pages/zh/developer/deprecating-a-subgraph.mdx index 726461b4c46c..f8966e025c13 100644 --- a/pages/zh/developer/deprecating-a-subgraph.mdx +++ b/pages/zh/developer/deprecating-a-subgraph.mdx @@ -2,13 +2,13 @@ title: Deprecating a Subgraph --- -So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: You've come to the right place! Follow the steps below: +So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: 1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) 2. Call 'deprecateSubgraph' with your own address as the first parameter 3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. -4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` -5. Voila! Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: Please note the following: +4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` +5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. 
Please note the following: - Curators will not be able to signal on the subgraph anymore - Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price From 74a9f9f89603dfef80ddae52db0af660f52cd36d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:21 -0500 Subject: [PATCH 118/432] New translations developer-faq.mdx (Chinese Simplified) --- pages/zh/developer/developer-faq.mdx | 86 ++++++++++++++-------------- 1 file changed, 43 insertions(+), 43 deletions(-) diff --git a/pages/zh/developer/developer-faq.mdx b/pages/zh/developer/developer-faq.mdx index 7c7a76b40daa..41449c60e5ab 100644 --- a/pages/zh/developer/developer-faq.mdx +++ b/pages/zh/developer/developer-faq.mdx @@ -2,35 +2,35 @@ title: Developer FAQs --- -### 1. 1. Can I delete my subgraph? +### 1. Can I delete my subgraph? It is not possible to delete subgraphs once they are created. -### 2. 2. Can I change my subgraph name? +### 2. Can I change my subgraph name? No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. ### 3. Can I change the GitHub account associated with my subgraph? -No. No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. Make sure to think of this carefully before you create your subgraph. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. -### 4. 4. Am I still able to create a subgraph if my smart contracts don't have events? +### 4. Am I still able to create a subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. Although this is not recommended as performance will be significantly slower. +If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. -### 5. 5. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. Is it possible to deploy one subgraph with the same name for multiple networks? -You will need separate names for multiple networks. You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. 
Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. 6. How are templates different from data sources? +### 6. How are templates different from data sources? -Templates allow you to create data sources on the fly, while your subgraph is indexing. Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). -### 7. 7. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? You can run the following command: @@ -40,31 +40,31 @@ docker pull graphprotocol/graph-node:latest **NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. -### 8. 8. How do I call a contract function or access a public state variable from my subgraph mappings? +### 8. How do I call a contract function or access a public state variable from my subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). -### 9. 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? -Unfortunately this is currently not possible. Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. +Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. -### 10. 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? +### 10. 
I want to contribute or add a GitHub issue, where can I find the open source repositories? - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 12. 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 13. 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? +### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? -Yes. Yes. You can do this by importing `graph-ts` as per the example below: +Yes. You can do this by importing `graph-ts` as per the example below: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,31 +73,31 @@ dataSource.network() dataSource.address() ``` -### 14. 14. Do you support block and call handlers on Rinkeby? +### 14. Do you support block and call handlers on Rinkeby? -On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. Call handlers are not supported for the time being. +On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. -### 15. 15. Can I import ethers.js or other JS libraries into my subgraph mappings? +### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? -Not currently, as mappings are written in AssemblyScript. Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. -### 16. 16. Is it possible to specifying what block to start indexing on? +### 16. Is it possible to specifying what block to start indexing on? -Yes. Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. 
In most cases we suggest using the block in which the contract was created: Start blocks In most cases we suggest using the block in which the contract was created: Start blocks +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks -### 17. 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. My subgraph is taking a very long time to sync. +### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) -### 18. 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? +### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? -Yes! Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. 19. What networks are supported by The Graph? +### 19. What networks are supported by The Graph? The graph-node supports any EVM-compatible JSON RPC API chain. @@ -135,38 +135,38 @@ In the Hosted Service, the following networks are supported: There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). -### 20. 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 21. 21. Is this possible to use Apollo Federation on top of graph-node? +### 21. Is this possible to use Apollo Federation on top of graph-node? -Federation is not supported yet, although we do want to support it in the future. Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. +Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. -### 22. 22. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? -By default query responses are limited to 100 items per collection. By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: +By default query responses are limited to 100 items per collection. 
If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -### 23. 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -### 24. 24. Where do I go to find my current subgraph on the Hosted Service? +### 24. Where do I go to find my current subgraph on the Hosted Service? -Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) You can find it [here.](https://thegraph.com/hosted-service) +Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) -### 25. 25. Will the Hosted Service start charging query fees? +### 25. Will the Hosted Service start charging query fees? -The Graph will never charge for the Hosted Service. The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. +The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. -### 26. 26. When will the Hosted Service be shut down? +### 26. When will the Hosted Service be shut down? If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service. -### 27. 27. How do I upgrade a subgraph on mainnet? +### 27. How do I upgrade a subgraph on mainnet? -If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. 
This will create a new version of your subgraph that Curators can start signaling on. +If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. From 4de4ed9b1c4f3ebc81690ad7cd2fb56e3dd4b0a2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:26 -0500 Subject: [PATCH 119/432] New translations distributed-systems.mdx (Chinese Simplified) --- pages/zh/developer/distributed-systems.mdx | 54 ++-------------------- 1 file changed, 5 insertions(+), 49 deletions(-) diff --git a/pages/zh/developer/distributed-systems.mdx b/pages/zh/developer/distributed-systems.mdx index ae06b86555f7..894fcbe2e18b 100644 --- a/pages/zh/developer/distributed-systems.mdx +++ b/pages/zh/developer/distributed-systems.mdx @@ -21,17 +21,17 @@ Consider this example of what may occur if a client polls an Indexer for the lat From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. 
## Polling for updated data -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: Here is an example: +We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: ```javascript /// Updates the protocol.paused variable to the latest @@ -42,17 +42,6 @@ async function updateProtocolPaused() { // same as leaving out that argument. let minBlock = 0 - for (;;) { - // Schedule a promise that will be ready once - // the next Ethereum block will likely be available. - /// Updates the protocol.paused variable to the latest -/// known value in a loop by fetching it using The Graph. -async function updateProtocolPaused() { - // It's ok to start with minBlock at 0. The query will be served - // using the latest block available. Setting minBlock to 0 is the - // same as leaving out that argument. - let minBlock = 0 - for (;;) { // Schedule a promise that will be ready once // the next Ethereum block will likely be available. @@ -82,17 +71,11 @@ async function updateProtocolPaused() { await nextBlock } } - console.log(response.protocol.paused) - - // Sleep to wait for the next block - await nextBlock - } -} ``` ## Fetching a set of related items -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. 
@@ -103,14 +86,6 @@ async function getDomainNames() { let pages = 5 const perPage = 1000 - // The first query will get the first page of results and also get the block - // hash so that the remainder of the queries are consistent with the first. - /// Gets a list of domain names from a single block using pagination -async function getDomainNames() { - // Set a cap on the maximum number of items to pull. - let pages = 5 - const perPage = 1000 - // The first query will get the first page of results and also get the block // hash so that the remainder of the queries are consistent with the first. let query = ` @@ -151,25 +126,6 @@ async function getDomainNames() { } } return result -} - while (data.domains.length == perPage && --pages) { - let lastID = data.domains[data.domains.length - 1].id - query = ` - { - domains(first: ${perPage}, where: { id_gt: "${lastID}" }, block: { hash: "${blockHash}" }) { - name - id - } - }` - - data = await graphql(query) - - // Accumulate domain names into the result - for (domain of data.domains) { - result.push(domain.name) - } - } - return result } ``` From b58ba276145ca58345d5bf09396549fc23e5c34f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:35 -0500 Subject: [PATCH 120/432] New translations introduction.mdx (Chinese Simplified) --- pages/zh/about/introduction.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/about/introduction.mdx b/pages/zh/about/introduction.mdx index 5d579bbc364f..5f840c040400 100644 --- a/pages/zh/about/introduction.mdx +++ b/pages/zh/about/introduction.mdx @@ -6,25 +6,25 @@ This page will explain what The Graph is and how you can get started. ## What The Graph Is -The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. It makes it possible to query data that is difficult to query directly. +The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. 
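For contrast, once this kind of data has been indexed by a subgraph, an owner-plus-trait question becomes a single GraphQL request. The sketch below is purely illustrative: the account/subgraph path, the `tokens` entity, and its `owner`, `fur`, and `tokenURI` fields are hypothetical placeholders, not a real BAYC subgraph schema.

```sh
# Hypothetical query sketch; entity and field names are illustrative only.
curl -X POST \
  -d '{ "query": "{ tokens(where: { owner: \"<OWNER_ADDRESS>\", fur: \"Golden Brown\" }) { id tokenURI } }" }' \
  https://api.thegraph.com/subgraphs/name/<ACCOUNT>/<SUBGRAPH>
```

Getting to that point, however, is the hard part, as the next paragraphs explain.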
+In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. +To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. **Indexing blockchain data is really, really hard.** Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. -The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. 
Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). ## How The Graph Works -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. @@ -37,11 +37,11 @@ The flow follows these steps: 1. A decentralized application adds data to Ethereum through a transaction on a smart contract. 2. The smart contract emits one or more events while processing the transaction. 3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). 
The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Next Steps In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. -Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. From d19cb0626d1368c14f873f59ceabb617ea4f6c6b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:39 -0500 Subject: [PATCH 121/432] New translations network.mdx (Chinese Simplified) --- pages/zh/about/network.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/zh/about/network.mdx b/pages/zh/about/network.mdx index 10d1d992fcab..b19f08d12bc7 100644 --- a/pages/zh/about/network.mdx +++ b/pages/zh/about/network.mdx @@ -2,14 +2,14 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. > GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) ## Overview -The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. Consumers use the applications and consume the data. +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). 
GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. From fc5fa4ce569f3ab014cb40794fd833668cc66734 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:44 -0500 Subject: [PATCH 122/432] New translations create-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/create-subgraph-hosted.mdx | 196 +++++++----------- 1 file changed, 75 insertions(+), 121 deletions(-) diff --git a/pages/zh/developer/create-subgraph-hosted.mdx b/pages/zh/developer/create-subgraph-hosted.mdx index 86b0d3df18d8..6b235e379634 100644 --- a/pages/zh/developer/create-subgraph-hosted.mdx +++ b/pages/zh/developer/create-subgraph-hosted.mdx @@ -2,9 +2,9 @@ title: Create a Subgraph --- -Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. +Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. -The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. +The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. 
If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. ## Supported Networks @@ -44,7 +44,7 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `aurora` - `aurora-testnet` -The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. +The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). @@ -65,17 +65,17 @@ The `` is the ID of your subgraph in Subgraph Studio, it can be f ## From An Example Subgraph -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: The following command does this: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: ``` graph init --studio ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. ## The Subgraph Manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). For the example subgraph, `subgraph.yaml` is: @@ -120,17 +120,17 @@ dataSources: The important entries to update for the manifest are: -- `description`: a human-readable description of what the subgraph is. `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. +- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. This is also displayed by the Graph Explorer. +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. In most cases we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. The schema for each entity is defined in the the schema.graphql file. +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. @@ -138,9 +138,9 @@ The important entries to update for the manifest are: - `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. 
Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. The triggers for a data source within a block are ordered using the following process: @@ -152,21 +152,21 @@ These ordering rules are subject to change. ### Getting The ABIs -The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: There are a few ways to obtain ABI files: +The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. - If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. Make sure you have the right ABI, otherwise running your subgraph will fail. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. ## The GraphQL Schema -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. ## Defining Entities Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. 
Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. -With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. Each type that should be an entity is required to be annotated with an `@entity` directive. +With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. ### Good Example @@ -184,7 +184,7 @@ type Gravatar @entity { ### Bad Example -The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. It is not recommended to map events or function calls to entities 1:1. +The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. ```graphql type GravatarAccepted @entity { @@ -199,29 +199,18 @@ type GravatarDeclined @entity { owner: Bytes displayName: String imageUrl: String -} - type Gravatar @entity { - id: ID! - owner: Bytes - displayName: String - imageUrl: String - accepted: Boolean -} - owner: Bytes - displayName: String - imageUrl: String } ``` ### Optional and Required Fields -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which is of type `ID!` (string). Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. +Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. ### Built-In Scalar Types @@ -229,19 +218,19 @@ Each entity must have an `id` field, which is of type `ID!` (string). Each entit We support the following scalars in our GraphQL API: -| Type | Description | -| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Scalar for `string` values. Null characters are not supported and are automatically removed. 
| -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums -You can also create enums within a schema. Enums have the following syntax: Enums have the following syntax: +You can also create enums within a schema. Enums have the following syntax: ```graphql enum TokenStatus { @@ -251,13 +240,13 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). #### Entity Relationships -An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. 
It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. +An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. Relationships are defined on entities just like any other field except that the type specified is that of another entity. @@ -267,8 +256,6 @@ Define a `Transaction` entity type with an optional one-to-one relationship with ```graphql type Transaction @entity { - id: ID! - type Transaction @entity { id: ID! transactionReceipt: TransactionReceipt } @@ -276,8 +263,6 @@ type Transaction @entity { type TransactionReceipt @entity { id: ID! transaction: Transaction -} - transaction: Transaction } ``` @@ -286,8 +271,6 @@ type TransactionReceipt @entity { Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: ```graphql -type Token @entity { - id: ID! type Token @entity { id: ID! } @@ -296,9 +279,6 @@ type TokenBalance @entity { id: ID! amount: Int! token: Token! -} - amount: Int! - token: Token! } ``` @@ -306,7 +286,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -314,8 +294,6 @@ We can make the balances for a token accessible from the token by deriving a `to ```graphql type Token @entity { - id: ID! - tokenBalances: [TokenBalance!]! type Token @entity { id: ID! tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") } @@ -324,19 +302,16 @@ type TokenBalance @entity { id: ID! amount: Int! token: Token! -} - amount: Int! - token: Token! } ``` #### Many-To-Many Relationships -For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. 
If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. #### Example -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. ```graphql type Organization @entity { @@ -364,17 +339,11 @@ type Organization @entity { type User @entity { id: ID! name: String! - organizations: [UserOrganization!] type Organization @entity { - id: ID! - name: String! - members: [User!]! + organizations: [UserOrganization!] @derivedFrom(field: "organization") } -type User @entity { - id: ID! - name: String! - organizations: [Organization!]! @derivedFrom(field: "members") -} # Set to `${user.id}-${organization.id}` +type UserOrganization @entity { + id: ID! # Set to `${user.id}-${organization.id}` user: User! organization: Organization! } @@ -399,23 +368,21 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { "unique identifier and primary key of the entity" id: ID! address: Bytes! -} - address: Bytes! } ``` ## Defining Fulltext Search Fields -Fulltext search queries filter and rank entities based on a text search input. Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. +Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. -A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. 
Each fulltext query may span multiple fields, but all included fields must be from a single entity type. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. @@ -437,18 +404,10 @@ type Band @entity { labels: [Label!]! discography: [Album!]! members: [Musician!]! -} - name: String! - description: String! - bio: String - wallet: Address - labels: [Label!]! - discography: [Album!]! - members: [Musician!]! } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. ```graphql query { @@ -465,7 +424,7 @@ query { ### Languages supported -Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". Supported language dictionaries: @@ -499,9 +458,9 @@ Supported algorithms for ordering results: ## Writing Mappings -The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). 
AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. -For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: @@ -530,19 +489,19 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. ### Recommended IDs for Creating New Entities -Every entity has to have an `id` that is unique among all entities of the same type. Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. +Every entity has to have an `id` that is unique among all entities of the same type. 
An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. - `event.params.id.toHex()` - `event.transaction.from.toHex()` - `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` -We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. +We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. ## Code Generation @@ -564,7 +523,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with ```javascript import { @@ -576,23 +535,23 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. ## Data Source Templates -A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. ### Data Source for the Main Contract -First, you define a regular data source for the main contract. First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. 
```yaml dataSources: @@ -619,13 +578,9 @@ dataSources: ### Data Source Templates for Dynamically Created Contracts -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. ```yaml -dataSources: - - kind: ethereum/contract - name: Factory - # ... other source fields for the main contract ... dataSources: - kind: ethereum/contract name: Factory @@ -659,7 +614,7 @@ templates: ### Instantiating a Data Source Template -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. ```typescript import { Exchange } from '../generated/templates' @@ -677,7 +632,7 @@ export function handleNewExchange(event: NewExchange): void { ### Data Source Context -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' @@ -702,7 +657,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. 
Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -740,7 +695,7 @@ While events provide an effective way to collect relevant changes to the state o Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. Call handlers currently depend on the Parity tracing API and these networks do not support it. +> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. ### Defining a Call Handler @@ -769,11 +724,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -788,11 +743,11 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. 
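As a rough sketch only, reusing the `Gravatar` entity and the generated `CreateGravatarCall` type from the Gravity example above (and assuming the same `_displayName` and `_imageUrl` parameter names), a call handler can follow the same load-or-create pattern shown earlier for the event handlers:

```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleCreateGravatar(call: CreateGravatarCall): void {
  // The transaction hash is a convenient unique id for call-derived entities.
  let id = call.transaction.hash.toHex()

  // Load-or-create, mirroring the handleUpdatedGravatar event handler shown earlier.
  let gravatar = Gravatar.load(id)
  if (gravatar == null) {
    gravatar = new Gravatar(id)
  }

  // Typed inputs are generated by `graph codegen`; the field names below are
  // assumed to match the Solidity parameter names of `createGravatar`.
  gravatar.displayName = call.inputs._displayName
  gravatar.imageUrl = call.inputs._imageUrl

  // `call.from` is provided by the `ethereum.Call` base class.
  gravatar.owner = call.from

  gravatar.save()
}
```

The generated call class exposes the typed arguments and return values through its `inputs` and `outputs` members, as described above, so the same approach carries over to any other function listed under `callHandlers`.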
## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. +In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. ### Supported Filters @@ -803,7 +758,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -The absense of a filter for a block handler will ensure that the handler is called every block. The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. +The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. ```yaml dataSources: @@ -832,7 +787,7 @@ dataSources: ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -855,7 +810,7 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. ## Experimental features @@ -885,7 +840,7 @@ Note that using a feature without declaring it will incur in a **validation erro A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. -Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. 
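As a minimal sketch, assuming the hash reaches the mapping as a plain string (for example from an event parameter), a read with `ipfs.cat` can look like this:

```typescript
import { ipfs } from '@graphprotocol/graph-ts'

// `hash` is a placeholder for a hash obtained on chain, e.g. from an event parameter.
function readIpfsFile(hash: string): string | null {
  let data = ipfs.cat(hash) // returns Bytes | null; null if the file cannot be resolved
  if (data === null) {
    return null
  }
  return data.toString() // interpret the file contents as UTF-8 text
}
```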
To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). > **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. @@ -895,7 +850,7 @@ In order to make this easy for subgraph developers, The Graph team wrote a tool ### Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. > **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. @@ -909,7 +864,7 @@ features: ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -943,25 +898,24 @@ If the subgraph encounters an error that query will return both the data and a g ### Grafting onto Existing Subgraphs -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. 
Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. -> **Note:** Grafting requires that the Indexer has indexed the base subgraph. **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: ```yaml description: ... -description: ... graft: base: Qm... # Subgraph ID of base subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. -Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. 
+Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: +The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types From 9a0fecce1910385c6d4b868ce4f66253caf27662 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:46 -0500 Subject: [PATCH 123/432] New translations assemblyscript-api.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-api.mdx | 90 ++++++++++------------- 1 file changed, 38 insertions(+), 52 deletions(-) diff --git a/pages/zh/developer/assemblyscript-api.mdx b/pages/zh/developer/assemblyscript-api.mdx index a29d1314de5b..a609e6cd657f 100644 --- a/pages/zh/developer/assemblyscript-api.mdx +++ b/pages/zh/developer/assemblyscript-api.mdx @@ -4,16 +4,16 @@ title: AssemblyScript API > Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: Two kinds of APIs are available out of the box: +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: - the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and - code generated from subgraph files by `graph codegen`. -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. ## Installation -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. 
All that is required to install these dependencies is to run one of the following commands: All that is required to install these dependencies is to run one of the following commands: +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: ```sh yarn install # Yarn @@ -41,7 +41,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. The current mapping API version is 0.0.6. +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. | Version | Release notes | |:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | @@ -68,15 +68,15 @@ import { ByteArray } from '@graphprotocol/graph-ts' _Construction_ - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. Prefixing with `0x` is optional. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. _Type conversions_ - `toHexString(): string` - Converts to a hex string prefixed with `0x`. - `toString(): string` - Interprets the bytes as a UTF-8 string. - `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. Throws in case of overflow. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. +- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. _Operators_ @@ -119,7 +119,7 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. The `BigInt` class has the following API: @@ -127,14 +127,14 @@ _Construction_ - `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. - `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. If your input is big-endian, call `.reverse()` first. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. 
If your input is big-endian, call `.reverse()` first. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. _Type conversions_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. - `x.toString(): string` – turns `BigInt` into a decimal number string. -- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. It's a good idea to first check `x.isI32()`. +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. - `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. _Math_ @@ -167,7 +167,7 @@ _Math_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). The `TypedMap` class has the following API: @@ -183,7 +183,7 @@ The `TypedMap` class has the following API: import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. `Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: @@ -211,7 +211,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -241,9 +241,9 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. This type allows accessing data such as the event's parent transaction and its parameters. +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -259,16 +259,16 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. See the next section for the two ways of updating existing entities. +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Updating existing entities There are two ways to update an existing entity: 1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. If the entity already exists, the changes are merged into it. +2. Simply create the entity with e.g. 
`new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. Changing properties is straight forward in most cases, thanks to the generated property setters: @@ -277,8 +277,6 @@ let transfer = new Transfer(id) transfer.from = ... transfer.to = ... transfer.amount = ... -transfer.to = ... -transfer.amount = ... ``` It is also possible to unset properties with one of the following two instructions: @@ -288,9 +286,9 @@ transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. Two examples would be `owner: Bytes` or `amount: BigInt`. +This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. -Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. ```typescript // This won't work @@ -306,13 +304,11 @@ entity.save() #### Removing entities from the store -There is currently no way to remove an entity via the generated types. There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' ... -import { store } from '@graphprotocol/graph-ts' -... let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` @@ -323,20 +319,17 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like Given a subgraph schema like +The following example illustrates this. 
Given a subgraph schema like ```graphql type Transfer @entity { from: Bytes! to: Bytes! amount: BigInt! -} - to: Bytes! - amount: BigInt! } ``` @@ -353,7 +346,7 @@ transfer.save() #### Events and Block/Transaction Data -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -399,9 +392,9 @@ class Transaction { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: This is achieved with the following code: +A common pattern is to access the contract from which an event originates. This is achieved with the following code: ```typescript // Import the generated contract class @@ -418,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. For public state variables a method with the same name is created automatically. +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. 
This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -454,8 +447,6 @@ let tuple = tupleArray as ethereum.Tuple let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! -let decoded = ethereum.decode('(address,uint256)', encoded) - let decoded = ethereum.decode('(address,uint256)', encoded) ``` @@ -471,7 +462,7 @@ For more information: import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -481,7 +472,7 @@ The `log` API includes the following functions: - `log.error(fmt: string, args: Array): void` - logs an error message. - `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) @@ -517,7 +508,7 @@ export function handleSomeEvent(event: SomeEvent): void { #### Logging multiple entries from an existing array -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. ```typescript let myArray = ['A', 'B', 'C'] @@ -552,9 +543,6 @@ export function handleSomeEvent(event: SomeEvent): void { event.block.hash.toHexString(), // "0x..." event.transaction.hash.toHexString(), // "0x..." ]) -} - event.transaction.hash.toHexString(), // "0x..." - ]) } ``` @@ -564,7 +552,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. 
The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. Given an IPFS hash or path, reading a file from IPFS is done as follows: @@ -581,7 +569,7 @@ let data = ipfs.cat(path) **Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -611,9 +599,9 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. 
The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. ### Crypto API @@ -621,7 +609,7 @@ On success, `ipfs.map` returns `void`. On success, `ipfs.map` returns `void`. If import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: Right now, there is only one: +The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: - `crypto.keccak256(input: ByteArray): ByteArray` @@ -638,15 +626,13 @@ JSON data can be parsed using the `json` API: - `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` - `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: ```typescript let value = json.fromBytes(...) -let value = json.fromBytes(...) if (value.kind == JSONValueKind.BOOL) { ... } -} ``` In addition, there is a method to check if the value is `null`: From 31cef5e843e193561556d703d4cdfe9cb94cd2ef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:51 -0500 Subject: [PATCH 124/432] New translations assemblyscript-migration-guide.mdx (Chinese Simplified) --- .../assemblyscript-migration-guide.mdx | 59 ++++--------------- 1 file changed, 10 insertions(+), 49 deletions(-) diff --git a/pages/zh/developer/assemblyscript-migration-guide.mdx b/pages/zh/developer/assemblyscript-migration-guide.mdx index 592fcdee6d94..2db90a608110 100644 --- a/pages/zh/developer/assemblyscript-migration-guide.mdx +++ b/pages/zh/developer/assemblyscript-migration-guide.mdx @@ -2,11 +2,11 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 That will enable subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 > Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. @@ -48,11 +48,6 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... -dataSources: - ... - mapping: - ... - ... dataSources: ... mapping: @@ -106,12 +101,12 @@ if (maybeValue) { Or force it like this: ```typescript -let maybeValue = load()! let maybeValue = load()! // breaks in runtime if value is null +let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. ### Variable Shadowing @@ -140,9 +135,6 @@ By doing the upgrade on your subgraph, sometimes you might get errors like these ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. if (decimals == null) { ~~~~ - in src/mappings/file.ts(41,21) - if (decimals == null) { - ~~~~ in src/mappings/file.ts(41,21) ``` To solve you can simply change the `if` statement to something like this: @@ -261,16 +253,6 @@ let somethingOrElse = something ? something : 'else' let somethingOrElse -if (something) { - somethingOrElse = something -} else { - somethingOrElse = 'else' -} something : 'else' - -// or - -let somethingOrElse - if (something) { somethingOrElse = something } else { @@ -288,7 +270,7 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` Which outputs this error: @@ -298,9 +280,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - let somethingOrElse: string = container.data ? 
container.data : "else"; - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: @@ -314,7 +293,7 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? data : 'else' // compiles just fine :) ``` ### Operator overloading with property access @@ -323,10 +302,6 @@ If you try to sum (for example) a nullable type (from a property access) with a ```typescript class BigInt extends Uint8Array { - @operator('+') - plus(other: BigInt): BigInt { - // ... - class BigInt extends Uint8Array { @operator('+') plus(other: BigInt): BigInt { // ... @@ -398,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: So you either initialize it first: +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: ```typescript let total = Total.load('latest') @@ -417,8 +392,6 @@ Or you can just change your GraphQL schema to not use a nullable type for this p type Total @entity { id: ID! amount: BigInt! -} - amount: BigInt! } ``` @@ -472,19 +445,13 @@ export class Something { This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: If you have a schema like this: +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: ```graphql type Something @entity { id: ID! } -type MyEntity @entity { - id: ID! - invalidField: [Something]! # no longer valid -} -} - type MyEntity @entity { id: ID! invalidField: [Something]! # no longer valid @@ -498,12 +465,6 @@ type Something @entity { id: ID! } -type MyEntity @entity { - id: ID! - invalidField: [Something]! # no longer valid -} -} - type MyEntity @entity { id: ID! invalidField: [Something!]! # valid @@ -517,7 +478,7 @@ This changed because of nullability differences between AssemblyScript versions, - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) - Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) - Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) - Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From 5860473e38fca8352df6f21447814ee5c129d244 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:25:57 -0500 Subject: [PATCH 125/432] New translations graphql-api.mdx (Chinese Simplified) --- pages/zh/developer/graphql-api.mdx | 34 +++++++++++++++--------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/pages/zh/developer/graphql-api.mdx b/pages/zh/developer/graphql-api.mdx index d835b27e91b3..f9cb6214fcd9 100644 --- a/pages/zh/developer/graphql-api.mdx +++ b/pages/zh/developer/graphql-api.mdx @@ -6,7 +6,7 @@ This guide explains the GraphQL Query API that is used for the Graph Protocol. ## Queries -In your subgraph schema you define types called `Entities`. In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. #### Examples @@ -36,7 +36,7 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. 
Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -51,11 +51,11 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. +When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. -Further, the `skip` parameter can be used to skip entities and paginate. Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. #### Example @@ -87,7 +87,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: ```graphql { @@ -100,11 +100,11 @@ If a client needs to retrieve a large number of entities, it is much more perfor } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. You can filter on mulltiple values within the `where` parameter. +You can use the `where` parameter in your queries to filter for different properties. 
You can filter on mulltiple values within the `where` parameter. #### Example @@ -154,13 +154,13 @@ _not_starts_with _not_ends_with ``` -Please note that some suffixes are only supported for specific types. Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. +Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. ### Time-travel queries -You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. @@ -198,9 +198,9 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. 
Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. Several special fulltext operators are available to be used in this `text` search field. +Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. Fulltext search operators: @@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" The following query will return all blogs with variations of "decentralize" followed by "philosophy" +The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" ```graphql { @@ -239,7 +239,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". ```graphql { @@ -256,7 +256,7 @@ Combine fulltext operators to make more complex filters. Combine fulltext operat The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. > **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -264,4 +264,4 @@ GraphQL schemas generally define root types for `queries`, `subscriptions` and ` All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. 
In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. From b12d4bfecefb5c84e0c116b85ba7837de1f54e08 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:26:01 -0500 Subject: [PATCH 126/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 44 +++++++------------ 1 file changed, 17 insertions(+), 27 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index 7ebbcd1eed72..bdc532e205e4 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -2,25 +2,25 @@ title: Deploy a Subgraph to the Hosted Service --- -If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. +If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. ## Create a Hosted Service account -Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. +Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. ## Store the Access Token -After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. +After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). 
Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. ## Create a Subgraph on the Hosted Service -Before deploying the subgraph, you need to create it in The Graph Explorer. Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: +Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: **Image** - Select an image to be used as a preview image and thumbnail for the subgraph. -**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ _This field cannot be changed later._ +**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ -**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ +**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ **Subtitle** - Text that will appear in subgraph cards. @@ -30,7 +30,7 @@ Before deploying the subgraph, you need to create it in The Graph Explorer. Befo **Hide** - Switching this on hides the subgraph in the Graph Explorer. -After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). +After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). ## Deploy a Subgraph on the Hosted Service @@ -38,26 +38,25 @@ Deploying your subgraph will upload the subgraph files that you've built with `y You deploy the subgraph by running `yarn deploy` -After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. 
Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined.

## Redeploying a Subgraph

When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block.

If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing.

### Deploying the subgraph to multiple Ethereum networks

In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: ```json { "network": "mainnet", "address": "0x123..." } -} ``` and @@ -67,14 +66,12 @@ and "network": "ropsten", "address": "0xabc..." } -} ``` Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: ```yaml # ... -# ... dataSources: - kind: ethereum/contract name: Gravity @@ -93,10 +90,6 @@ In order generate a manifest to either network, you could add two additional com ```json { ... - "scripts": { - ... - { - ... "scripts": { ... "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", @@ -106,9 +99,6 @@ In order generate a manifest to either network, you could add two additional com ... "mustache": "^3.1.0" } -} - "mustache": "^3.1.0" - } } ``` @@ -128,9 +118,9 @@ A working example of this can be found [here](https://github.com/graphprotocol/e ## Checking subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. 
The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: ```graphql { @@ -157,14 +147,14 @@ Graph Node exposes a graphql endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. ## Subgraph archive policy -The Hosted Service is a free Graph Node indexer. The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. +The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. **A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.** -Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. +Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. 
Developers can always redeploy an archived subgraph if it is required again. From 42aabc03ff43bee6fa692eaa52f8218f9d31644c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 13 Jan 2022 01:26:06 -0500 Subject: [PATCH 127/432] New translations migrating-subgraph.mdx (Chinese Simplified) --- .../zh/hosted-service/migrating-subgraph.mdx | 34 +++++++++---------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/pages/zh/hosted-service/migrating-subgraph.mdx b/pages/zh/hosted-service/migrating-subgraph.mdx index 451798e5aa85..85f72f053b30 100644 --- a/pages/zh/hosted-service/migrating-subgraph.mdx +++ b/pages/zh/hosted-service/migrating-subgraph.mdx @@ -6,7 +6,7 @@ title: Migrating an Existing Subgraph to The Graph Network This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. -This will tell you everything you need to know about how to migrate to the decentralized network and manage your subgraphs moving forward. The process is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. The process is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. +This will tell you everything you need to know about how to migrate to the decentralized network and manage your subgraphs moving forward. The process is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. ### Migrating An Existing Subgraph to The Graph Network @@ -20,7 +20,7 @@ npm install -g @graphprotocol/graph-cli yarn global add @graphprotocol/graph-cli ``` -2. Create a subgraph on the [Subgraph Studio](https://thegraph.com/studio/). Create a subgraph on the [Subgraph Studio](https://thegraph.com/studio/). Guides on how to do that can be found in the [Subgraph Studio docs](/studio/subgraph-studio) and in [this video tutorial](https://www.youtube.com/watch?v=HfDgC2oNnwo). +2. Create a subgraph on the [Subgraph Studio](https://thegraph.com/studio/). Guides on how to do that can be found in the [Subgraph Studio docs](/studio/subgraph-studio) and in [this video tutorial](https://www.youtube.com/watch?v=HfDgC2oNnwo). 3. Inside the main project subgraph repository, authenticate the subgraph to deploy and build on the studio: ```sh @@ -33,13 +33,13 @@ graph auth --studio graph codegen && graph build ``` -5. Deploy the subgraph to the Studio. Deploy the subgraph to the Studio. You can find your `` in the Studio UI, which is based on the name of your subgraph. +5. Deploy the subgraph to the Studio. You can find your `` in the Studio UI, which is based on the name of your subgraph. ```sh graph deploy --studio ``` -6. Test queries on the Studio's playground. Test queries on the Studio's playground. Here are some examples for the [Sushi - Mainnet Exchange Subgraph](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Playground): +6. Test queries on the Studio's playground. 
Here are some examples for the [Sushi - Mainnet Exchange Subgraph](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Playground): ```sh { @@ -56,19 +56,19 @@ graph codegen && graph build } ``` -7. Fill in the description and the details of your subgraph and choose up to 3 categories. Upload a project image in the Studio if you'd like as well. Upload a project image in the Studio if you'd like as well. +7. Fill in the description and the details of your subgraph and choose up to 3 categories. Upload a project image in the Studio if you'd like as well. 8. Publish the subgraph on The Graph's Network by hitting the "Publish" button. -- Remember that publishing is an on-chain action and will require gas to be paid for in Ethereum - see an example transaction [here](https://etherscan.io/tx/0xd0c3fa0bc035703c9ba1ce40c1862559b9c5b6ea1198b3320871d535aa0de87b). Prices are roughly around 0.0425 ETH at 100 gwei. Prices are roughly around 0.0425 ETH at 100 gwei. +- Remember that publishing is an on-chain action and will require gas to be paid for in Ethereum - see an example transaction [here](https://etherscan.io/tx/0xd0c3fa0bc035703c9ba1ce40c1862559b9c5b6ea1198b3320871d535aa0de87b). Prices are roughly around 0.0425 ETH at 100 gwei. - Any time you need to upgrade your subgraph, you will be charged an upgrade fee. Remember, upgrading is just publishing another version of your existing subgraph on-chain. Because this incurs a cost, it is highly recommended to deploy and test your subgraph on Rinkeby before deploying to mainnet. It can, in some cases, also require some GRT if there is no signal on that subgraph. In the case there is signal/curation on that subgraph version (using auto-migrate), the taxes will be split. -And that's it! And that's it! After you are done publishing, you'll be able to view your subgraphs live on the network via [The Graph Explorer](https://thegraph.com/explorer). +And that's it! After you are done publishing, you'll be able to view your subgraphs live on the network via [The Graph Explorer](https://thegraph.com/explorer). ### Upgrading a Subgraph on the Network If you would like to upgrade an existing subgraph on the network, you can do this by deploying a new version of your subgraph to the Subgraph Studio using the Graph CLI. -1. Make changes to your current subgraph. Make changes to your current subgraph. A good idea is to test small fixes on the Subgraph Studio by publishing to Rinkeby. +1. Make changes to your current subgraph. A good idea is to test small fixes on the Subgraph Studio by publishing to Rinkeby. 2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): ```sh @@ -76,17 +76,17 @@ graph deploy --studio ``` 3. Test the new version in the Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). Remember that this requires gas (as described in the section above). +4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). ### Owner Upgrade Fee: Deep Dive -An upgrade requires GRT to be migrated from the old version of the subgraph to the new version. An upgrade requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every upgrade, a new bonding curve will be created (more on bonding curves [here](/curating#bonding-curve-101)). 
An upgrade requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every upgrade, a new bonding curve will be created (more on bonding curves [here](/curating#bonding-curve-101)).

The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive upgrade calls. The example below only applies if your subgraph is being actively curated on. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal yourself on your own subgraph.

- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
- The owner upgrades to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT are put into the new curve and 2,500 GRT are burned
- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the upgrade, otherwise the upgrade will not succeed. This happens in the same transaction as the upgrade.

_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of upgrades for subgraph developers._

### What Can I Do With My Subgraph?

If you're making a lot of changes to your subgraph, it is not a good idea to continually upgrade it and front the upgrade costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective, but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan an upgrade so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/8tgJ7rKW) on Discord to let Indexers know when you're versioning your subgraphs.

Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
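To make the upgrade-fee arithmetic in the deep-dive section above easy to sanity-check, here is a minimal shell sketch using the same illustrative numbers. It is not protocol code; it simply restates the 2.5% curation tax and the 50/50 owner/curator split described above.

```sh
# Illustrative figures from the example above; adjust SIGNAL to your own case
SIGNAL=100000                   # GRT signaled on v1 via auto-migrate
TAX=$((SIGNAL * 25 / 1000))     # 2.5% curation tax on migrated signal -> 2500 GRT burned
OWNER_SHARE=$((TAX / 2))        # half the tax is burned from the owner's wallet -> 1250 GRT
ON_NEW_CURVE=$((SIGNAL - TAX))  # what actually lands on the v2 bonding curve -> 97500 GRT

echo "tax=$TAX GRT, owner pays=$OWNER_SHARE GRT, migrated to v2=$ON_NEW_CURVE GRT"
```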
### Updating the Metadata of a Subgraph -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio where you can edit all applicable fields. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio where you can edit all applicable fields. +You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio where you can edit all applicable fields. -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. If this is checked, an an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. +Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. ## Best Practices for Deploying a Subgraph to The Graph Network @@ -119,7 +119,7 @@ Follow the steps [here](/developer/deprecating-a-subgraph) to deprecate your sub The Hosted Service was set up to allow developers to deploy their subgraphs without any restrictions. -In order for The Graph Network to truly be decentralized, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/studio/billing). For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/studio/billing). +In order for The Graph Network to truly be decentralized, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/studio/billing). ### Estimate Query Fees on the Network @@ -133,7 +133,7 @@ Remember that it's a dynamic and growing market, but how you interact with it is ## Additional Resources -If you're still confused, fear not! If you're still confused, fear not! Check out the following resources or watch our video guide on migrating subgraphs to the decentralized network below: +If you're still confused, fear not! Check out the following resources or watch our video guide on migrating subgraphs to the decentralized network below:
+
+ +Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements. + +You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely! + +Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries. + +**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist! + +## Versioning your Subgraph with the CLI + +Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version. + +Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. + +Please note that there are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, developers must also fund a part of the curation tax on auto-migrating signal. You cannot publish a new version of your subgraph if curators have not signaled on it. For more information on the risks of curation, please read more [here](/curating). + +### Automatic Archiving of Subgraph Versions + +Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived. 
+ +![Subgraph Studio - Unarchive](/img/Unarchive.png) + +## Managing your API Keys + +Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application. + +The Studio will list out existing API keys, which will give you the ability to manage or delete them. + +1. The **Overview** section will allow you to: + - Edit your key name + - Regenerate API keys + - View the current usage of the API key with stats: + - Number of queries + - Amount of GRT spent +2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can: + - View and manage the domain names authorized to use your API key + - Assign subgraphs that can be queried with your API key + +## How to Manage your Subgraph + +API keys aside, you’ll have many tools at your disposal to manage your subgraphs. You can organize your subgraphs by their **status** and **category**. + +- The **Status** tag allows you to pick between a variety of tags including ``, ``, ``, ``, etc. +- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. From 7ac52e7f78b12faa78230c4c44e69cf82dd6c4ad Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:03 -0500 Subject: [PATCH 143/432] New translations studio-faq.mdx (Vietnamese) --- pages/vi/studio/studio-faq.mdx | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) create mode 100644 pages/vi/studio/studio-faq.mdx diff --git a/pages/vi/studio/studio-faq.mdx b/pages/vi/studio/studio-faq.mdx new file mode 100644 index 000000000000..4db4d7ccddaa --- /dev/null +++ b/pages/vi/studio/studio-faq.mdx @@ -0,0 +1,21 @@ +--- +title: Subgraph Studio FAQs +--- + +### 1. How do I create an API Key? + +In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. + +### 2. Can I create multiple API Keys? + +A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). + +### 3. How do I restrict a domain for an API Key? + +After creating an API Key, in the Security section you can define the domains that can query a specific API Key. + +### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? + +You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. + +Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. 
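As a rough illustration of the flow described in this FAQ, here is what a query against a published subgraph can look like from the command line. The API key and subgraph ID below are placeholders (the ID is simply the Sushi example referenced elsewhere in this document), the URL shape mirrors the query URL shown on the subgraph's page in the Explorer, and `_meta` is used only because it exists on every subgraph:

```sh
# Placeholders: substitute your own API key and the subgraph ID from the Explorer
API_KEY="your-api-key"
SUBGRAPH_ID="0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0"

curl -X POST "https://gateway.thegraph.com/api/$API_KEY/subgraphs/id/$SUBGRAPH_ID" \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ _meta { block { number } } }"}'
```

Queries sent this way are billed against the API key exactly as described above.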
From 14a3457812df00ced7904732b6dda1830ca4c7c9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:04 -0500 Subject: [PATCH 144/432] New translations multisig.mdx (Vietnamese) --- pages/vi/studio/multisig.mdx | 82 ++++++++++++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 pages/vi/studio/multisig.mdx diff --git a/pages/vi/studio/multisig.mdx b/pages/vi/studio/multisig.mdx new file mode 100644 index 000000000000..164835bdb8a4 --- /dev/null +++ b/pages/vi/studio/multisig.mdx @@ -0,0 +1,82 @@ +--- +title: Using a Multisig Wallet +--- + +Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. + +### Create a Subgraph + +Similary to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. + +For initializing a starter subgraph, you can follow the commands shown in the UI, or simply run + +``` +graph init --studio +``` + +`SUBGRAPH_SLUG` is the name of your subgraph that you can copy from the UI, or from the URL in the browser. This command should create a folder in your file system with all the necessary files to start developing a subgraph. + +### Deploy a Subgraph + +Once your subgraph is ready to be deployed to the graph node, simply follow the commands shown in the UI, or run the following command: + +``` +graph deploy --studio +``` + +**Note**: Make sure that you are inside of the subgraph folder before running the command. + +### Publish a Subgraph or a Version + +You can either publish a new subgraph to the decentralized network or publish a new version of the previously published subgraph. + +#### Publish a New Subgraph + +There are a couple of ways to publish a subgraph using multisig wallets. Here we'll describe invoking the [`publishNewSubgraph`](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol#L231) function in the [GNS contract](https://etherscan.io/address/0xaDcA0dd4729c8BA3aCf3E99F3A9f471EF37b6825) using Etherscan. + +Before we use that function, we need to generate input arguments for it. Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide the following: + +- Ethereum address of your multisig wallet +- Subgraph that you want to publish +- Version that you want to publish + +After clicking on "Get Arguments", we'll generate all the contract arguments for you! + +There should be 4 arguments: + +- `graphAccount`: which is your multisig account address +- `subgraphDeploymentID`: the hex hash of the deployment ID for that subgraph +- `versionMetadata`: version metadata (label and description) that gets uploaded to IPFS. The hex hash value for that JSON file will be provided. 
+- `subgraphMetadata`: simlar to version metadata, subgraph metadata (name, image, description, website and source code url) gets uploaded to IPFS, and we provide the hex hash value for that JSON file + +With those 4 arguments, you should be able to: + +- Visit [our GraphProxy](https://etherscan.io/address/0xaDcA0dd4729c8BA3aCf3E99F3A9f471EF37b6825#writeProxyContract) contract on Etherscan +- Connect to Etherscan using WalletConnect via the WalletConnect Safe app of your multisig +- Call the `publishNewSubgraph` method with the paramaters that were generated by our tool + +#### Publish a New Version + +To publish a new version of an existing subgraph we first need to generate input arguments for it. Access [this page](https://thegraph.com/studio/multisig) in Subgraph Studio and provide: + +- Ethereum address of your multisig wallet +- Subgraph that you want to publish +- Version that you want to publish +- The ID of the subgraph you want to update in Graph Explorer + +After clicking on "Get Arguments" we'll generate all the contract arguments for you! + +On the right side of the UI under the `Publish New Version` title, there should be 4 arguments: + +- `graphAccount`: which is your Multisig account address +- `subgraphNumber`: is the number of your already published subgraph. It is a part of the subgraph id for a published subgraph queried through The Graph Network subgraph. +- `subgraphDeploymentID`: which is the hex hash of the deployment ID for that subgraph +- `versionMetadata`: version metadata (label and description) gets uploaded to IPFS, and we provide the hex hash value for that JSON file + +Now that we generated all the arguments you are ready to proceed and call the `publishNewVersion` method. In order to do so, you should: + +- Visit [the GraphProxy](https://etherscan.io/address/0xaDcA0dd4729c8BA3aCf3E99F3A9f471EF37b6825#writeProxyContract) contract on Etherscan +- Connect to Etherscan using WalletConnect via the WalletConnect Safe app of your Multisig +- Call the `publishNewVersion` method with the paramaters that were generated by our tool + +Once the transaction is successful, your subgraph should have a new version of your subgraph in Graph Explorer which means that curators can start signaling on it and indexers can start indexing it. From e53203b7b9e672211029b2370792819f9c674d99 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:05 -0500 Subject: [PATCH 145/432] New translations deploy-subgraph-studio.mdx (Vietnamese) --- pages/vi/studio/deploy-subgraph-studio.mdx | 68 ++++++++++++++++++++++ 1 file changed, 68 insertions(+) create mode 100644 pages/vi/studio/deploy-subgraph-studio.mdx diff --git a/pages/vi/studio/deploy-subgraph-studio.mdx b/pages/vi/studio/deploy-subgraph-studio.mdx new file mode 100644 index 000000000000..2155d8fe8976 --- /dev/null +++ b/pages/vi/studio/deploy-subgraph-studio.mdx @@ -0,0 +1,68 @@ +--- +title: Deploy a Subgraph to the Subgraph Studio +--- + +Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: + +- Install The Graph CLI (with both yarn and npm) +- Create your Subgraph in the Subgraph Studio +- Authenticate your account from the CLI +- Deploying a Subgraph to the Subgraph Studio + +## Installing Graph CLI + +We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. 
This can be done using npm or yarn. + +**Install with yarn:** + +```bash +yarn global add @graphprotocol/graph-cli +``` + +**Install with npm:** + +```bash +npm install -g @graphprotocol/graph-cli +``` + +## Create your Subgraph in Subgraph Studio + +Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. + +## Initialize your Subgraph + +Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: + +```bash +graph init --studio +``` + +The `` value can be found on your subgraph details page in Subgraph Studio: + +![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) + +After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. + +## Graph Auth + +Before being able to deploy your subgraph to Subgraph Studio, you need to login to your account within the CLI. To do this, you will need your deploy key that you can find on your "My Subgraphs" page or on your subgraph details page. + +Here is the command that you need to use to authenticate from the CLI: + +```bash +graph auth --studio +``` + +## Deploying a Subgraph to Subgraph Studio + +Once you are ready, you can deploy your subgraph to Subgraph Studio. Doing this won't publish your subgraph to the decentralized network, it will only deploy it to your Studio account where you will be able to test it and update the metadata. + +Here is the CLI command that you need to use to deploy your subgraph. + +```bash +graph deploy --studio +``` + +After running this command, the CLI will ask for a version label, you can name it however you want, you can use labels such as `0.1` and `0.2` or use letters as well such as `uniswap-v2-0.1` . Those labels will be visible in Graph Explorer and can be used by curators to decide if they want to signal on this version or not, so choose them wisely. + +Once deployed, you can test your subgraph in Subgraph Studio using the playground, deploy another version if needed, update the metadata, and when you are ready, publish your subgraph to Graph Explorer. From 2d5269294d509083d8aaa265a315471226a22b85 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:06 -0500 Subject: [PATCH 146/432] New translations billing.mdx (Vietnamese) --- pages/vi/studio/billing.mdx | 68 +++++++++++++++++++++++++++++++++++++ 1 file changed, 68 insertions(+) create mode 100644 pages/vi/studio/billing.mdx diff --git a/pages/vi/studio/billing.mdx b/pages/vi/studio/billing.mdx new file mode 100644 index 000000000000..588cd2ed2f40 --- /dev/null +++ b/pages/vi/studio/billing.mdx @@ -0,0 +1,68 @@ +--- +title: Billing on the Subgraph Studio +--- + +### Overview + +Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. 
It’ll allow you to: + +- Add and remove GRT +- Keep track of your balances based on how much GRT you have added to your account, how much you have removed, and your invoices +- Automatically clear payments based on query fees generated + +In order to add GRT to your account, you will need to go through the following steps: + +1. Purchase GRT and ETH on an exchange of your choice +2. Send the GRT and ETH to your wallet +3. Bridge GRT to Polygon using the UI + + a) You will receive 0.001 Matic in a few minutes after you send any amount of GRT to the Polygon bridge. You can track the transaction on [Polygonscan](https://polygonscan.com/) by inputting your address into the search bar. + +4. Add bridged GRT to the billing contract on Polygon. The billing contract address is: [0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). + + a) In order to complete step #4, you'll need to switch your network in your wallet to Polygon. You can add Polygon's network by connecting your wallet and clicking on "Choose Matic (Polygon) Mainnet" [here.](https://chainlist.org/) Once you've added the network, switch it over in your wallet by navigating to the network pill on the top right hand side corner. In Metamask, the network is called **Matic Mainnnet.** + +At the end of each week, if you used your API keys, you will receive an invoice based on the query fees you have generated during this period. This invoice will be paid using GRT available in your balance. Query volume is evaluated by the API keys you own. Your balance will be updated after fees are withdrawn. + +#### Here’s how you go through the invoicing process: + +There are 4 states your invoice can be in: + +1. Created - your invoice has just been created and not been paid yet +2. Paid - your invoice has been successfully paid +3. Unpaid - there is not enough GRT in your balance on the billing contract +4. Error - there is an error processing the payment + +**See the diagram below for more information:** + +![Billing Flow](/img/billing-flow.png) + +For a quick demo of how billing works on the Subgraph Studio, check out the video below: + +
+ +
+ +### Multisig Users + +Multisigs are smart-contracts that can exist only on the network they have been created, so if you created one on Ethereum Mainnet - it will only exist on Mainnet. Since our billing uses Polygon, if you were to bridge GRT to the multisig address on Polygon the funds would be lost. + +To overcome this issue, we created [a dedicated tool](https://multisig-billing.thegraph.com/) that will help you deposit GRT on our billing contract (on behalf of the multisig) with a standard wallet / EOA (an account controlled by a private key). + +You can access our Multisig Billing Tool here: https://multisig-billing.thegraph.com/ + +This tool will guide you to go through the following steps: + +1. Connect your standard wallet / EOA (this wallet needs to own some ETH as well as the GRT you want to deposit) +2. Bridge GRT to Polygon. You will have to wait 7-8 minutes after the transaction is complete for the bridge transfer to be finalized. +3. Once your GRT is available on your Polygon balance you can deposit them to the billing contract while specifying the multisig address you are funding in the `Multisig Address` field. + +Once the deposit transaction has been confirmed you can go back to [Subgraph Studio](https://thegraph.com/studio/) and connect with your Gnosis Safe Multisig to create API keys and use them to generate queries. + +Those queries will generate invoices that will be paid automatically using the multisig’s billing balance. From 8d5ad5dec56702dc9f15a09678f84c62a6543a1b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:08 -0500 Subject: [PATCH 147/432] New translations indexing.mdx (Vietnamese) --- pages/vi/indexing.mdx | 670 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 670 insertions(+) create mode 100644 pages/vi/indexing.mdx diff --git a/pages/vi/indexing.mdx b/pages/vi/indexing.mdx new file mode 100644 index 000000000000..090b1be2b226 --- /dev/null +++ b/pages/vi/indexing.mdx @@ -0,0 +1,670 @@ +--- +title: Indexer +--- + +import { Difficulty } from '@/components' + +Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. + +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. + +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. + + + +## FAQ + +### What is the minimum stake required to be an indexer on the network? + +The minimum stake for an indexer is currently set to 100K GRT. + +### What are the revenue streams for an indexer? + +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
+ +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. + +### How are rewards distributed? + +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** + +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). + +### What is a proof of indexing (POI)? + +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. + +### When are indexing rewards distributed? + +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). + +### Can pending indexer rewards be monitored? + +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. + +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: + +1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: + +```graphql +query indexerAllocations { + indexer(id: "") { + allocations { + activeForIndexer { + allocations { + id + } + } + } + } +} +``` + +Use Etherscan to call `getRewards()`: + +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) + +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. + +### What are disputes and where can I view them? + +Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. 
When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. + +Disputes have **three** possible outcomes, so does the deposit of the Fishermen. + +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. +- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. + +Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. + +### What are query fee rebates and when are they distributed? + +Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. + +Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. + +### What is query fee cut and indexing reward cut? + +The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. + +- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. + +- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. + +### How do indexers know which subgraphs to index? + +Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: + +- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. + +- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. 
+ +- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. + +- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. + +### What are the hardware requirements? + +- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. +- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. + +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | + +### What are some basic security precautions an indexer should take? + +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. + +- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. + +## Infrastructure + +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. + +- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. + +- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. + +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. + +- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. + +- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. + +Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. + +### Ports overview + +> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. 
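To make the rule of thumb above concrete, here is one possible way to express it with `ufw` on the host running these services. This is an assumption rather than a required setup (use whatever firewall you actually run), and the port numbers are the defaults from the tables below:

```sh
# Default-deny inbound traffic, then open only what has to be public
sudo ufw default deny incoming
sudo ufw allow 22/tcp     # SSH for administration
sudo ufw allow 8000/tcp   # Graph Node GraphQL HTTP (subgraph queries)
sudo ufw allow 7600/tcp   # Indexer service (paid subgraph queries)
# JSON-RPC (8020), indexing status (8030), indexer management (18000) and
# Postgres (5432) are admin/database endpoints and stay closed to the outside.
sudo ufw enable
```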
+ +#### Graph Node + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | + +#### Indexer Service + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | + +#### Indexer Agent + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | +| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | + +### Setup server infrastructure using Terraform on Google Cloud + +#### Install prerequisites + +- Google Cloud SDK +- Kubectl command line tool +- Terraform + +#### Create a Google Cloud Project + +- Clone or navigate to the indexer repository. + +- Navigate to the ./terraform directory, this is where all commands should be executed. + +```sh +cd terraform +``` + +- Authenticate with Google Cloud and create a new project. + +```sh +gcloud auth login +project= +gcloud projects create --enable-cloud-apis $project +``` + +- Use the Google Cloud Console's billing page to enable billing for the new project. + +- Create a Google Cloud configuration. + +```sh +proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") +gcloud config configurations create $project +gcloud config set project "$proj_id" +gcloud config set compute/region us-central1 +gcloud config set compute/zone us-central1-a +``` + +- Enable required Google Cloud APIs. + +```sh +gcloud services enable compute.googleapis.com +gcloud services enable container.googleapis.com +gcloud services enable servicenetworking.googleapis.com +gcloud services enable sqladmin.googleapis.com +``` + +- Create a service account. + +```sh +svc_name= +gcloud iam service-accounts create $svc_name \ + --description="Service account for Terraform" \ + --display-name="$svc_name" +gcloud iam service-accounts list +# Get the email of the service account from the list +svc=$(gcloud iam service-accounts list --format='get(email)' +--filter="displayName=$svc_name") +gcloud iam service-accounts keys create .gcloud-credentials.json \ + --iam-account="$svc" +gcloud projects add-iam-policy-binding $proj_id \ + --member serviceAccount:$svc \ + --role roles/editor +``` + +- Enable peering between database and Kubernetes cluster that will be created in the next step. + +```sh +gcloud compute addresses create google-managed-services-default \ + --prefix-length=20 \ + --purpose=VPC_PEERING \ + --network default \ + --global \ + --description 'IP Range for peer networks.' +gcloud services vpc-peerings connect \ + --network=default \ + --ranges=google-managed-services-default +``` + +- Create minimal terraform configuration file (update as needed). + +```sh +indexer= +cat > terraform.tfvars < \ + -f Dockerfile.indexer-service \ + -t indexer-service:latest \ +# Indexer agent +docker build \ + --build-arg NPM_TOKEN= \ + -f Dockerfile.indexer-agent \ + -t indexer-agent:latest \ +``` + +- Run the components + +```sh +docker run -p 7600:7600 -it indexer-service:latest ... +docker run -p 18000:8000 -it indexer-agent:latest ... +``` + +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). 
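+
+If you want a quick sanity check that both containers came up, something like the following should work with the port mappings used in the `docker run` commands above. Any HTTP status code in the response simply confirms that the process is listening; the exact code depends on the route you hit.
+
+```sh
+# Print the HTTP status code returned by each component
+curl -s -o /dev/null -w "indexer-service: %{http_code}\n" http://localhost:7600/
+curl -s -o /dev/null -w "indexer-agent:   %{http_code}\n" http://localhost:18000/
+```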
+ +#### Using K8s and Terraform + +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section + +#### Usage + +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). + +#### Indexer agent + +```sh +graph-indexer-agent start \ + --ethereum \ + --ethereum-network mainnet \ + --mnemonic \ + --indexer-address \ + --graph-node-query-endpoint http://localhost:8000/ \ + --graph-node-status-endpoint http://localhost:8030/graphql \ + --graph-node-admin-endpoint http://localhost:8020/ \ + --public-indexer-url http://localhost:7600/ \ + --indexer-geo-coordinates \ + --index-node-ids default \ + --indexer-management-port 18000 \ + --metrics-port 7040 \ + --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + --default-allocation-amount 100 \ + --register true \ + --inject-dai true \ + --postgres-host localhost \ + --postgres-port 5432 \ + --postgres-username \ + --postgres-password \ + --postgres-database indexer \ + | pino-pretty +``` + +#### Indexer service + +```sh +SERVER_HOST=localhost \ +SERVER_PORT=5432 \ +SERVER_DB_NAME=is_staging \ +SERVER_DB_USER= \ +SERVER_DB_PASSWORD= \ +graph-indexer-service start \ + --ethereum \ + --ethereum-network mainnet \ + --mnemonic \ + --indexer-address \ + --port 7600 \ + --metrics-port 7300 \ + --graph-node-query-endpoint http://localhost:8000/ \ + --graph-node-status-endpoint http://localhost:8030/graphql \ + --postgres-host localhost \ + --postgres-port 5432 \ + --postgres-username \ + --postgres-password \ + --postgres-database is_staging \ + --network-subgraph-endpoint https://gateway.network.thegraph.com/network \ + | pino-pretty +``` + +#### Indexer CLI + +The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. + +```sh +graph indexer connect http://localhost:18000 +graph indexer status +``` + +#### Indexer management using indexer CLI + +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. + +#### Usage + +The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. + +- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) + +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. 
This is how they are applied in the indexer agent.
+
+- `graph indexer rules set [options] <deployment-id> <key1> <value1> ...` - Set one or more indexing rules.
+
+- `graph indexer rules start [options] <deployment-id>` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to `always`, then all available subgraphs on the network will be indexed.
+
+- `graph indexer rules stop [options] <deployment-id>` - Stop indexing a deployment and set its `decisionBasis` to `never`, so it will skip this deployment when deciding on deployments to index.
+
+- `graph indexer rules maybe [options] <deployment-id>` - Set the `decisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment.
+
+All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `--output` argument.
+
+#### Indexing rules
+
+Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds, it will be chosen for indexing.
+
+For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+
+Data model:
+
+```graphql
+type IndexingRule {
+  deployment: string
+  allocationAmount: string | null
+  parallelAllocations: number | null
+  decisionBasis: IndexingDecisionBasis
+  maxAllocationPercentage: number | null
+  minSignal: string | null
+  maxSignal: string | null
+  minStake: string | null
+  minAverageQueryFees: string | null
+  custom: string | null
+}
+
+IndexingDecisionBasis {
+  rules
+  never
+  always
+}
+```
+
+#### Cost models
+
+Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which it intends to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers.
+
+#### Agora
+
+The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
+
+A statement consists of a predicate, which is used for matching GraphQL queries, and a cost expression which, when evaluated, outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
+ +Example cost model: + +``` +# This statement captures the skip value, +# uses a boolean expression in the predicate to match specific queries that use `skip` +# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; + +# This default will match any GraphQL expression. +# It uses a Global substituted into the expression to calculate cost +default => 0.1 * $SYSTEM_LOAD; +``` + +Example query costing using the above model: + +| Query | Price | +| ---------------------------------------------------------------------------- | ------- | +| { pairs(skip: 5000) { id } } | 0.5 GRT | +| { tokens { symbol } } | 0.1 GRT | +| { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | + +#### Applying the cost model + +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. + +```sh +indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' +indexer cost set model my_model.agora +``` + +## Interacting with the network + +### Stake in the protocol + +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ + +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. + +#### Approve tokens + +1. Open the [Remix app](https://remix.ethereum.org/) in a browser + +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). + +3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. + +4. Under environment select `Injected Web3` and under `Account` select your indexer address. + +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. + +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). + +#### Stake tokens + +1. Open the [Remix app](https://remix.ethereum.org/) in a browser + +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. + +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. + +4. Under environment select `Injected Web3` and under `Account` select your indexer address. + +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. + +6. Call `stake()` to stake GRT in the protocol. + +7. 
(Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator, call `setOperator()` with the operator address.
+
+8. (Optional) In order to control the distribution of rewards and strategically attract delegators, indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the indexer and 5% to delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the indexer and 40% to delegators, and sets the `cooldownBlocks` period to 500 blocks.
+
+```
+setDelegationParameters(950000, 600000, 500)
+```
+
+### The life of an allocation
+
+After being created by an indexer, a healthy allocation goes through four states.
+
+- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows the indexer to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules.
+
+- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI), its indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more).
+
+- **Finalized** - Once an allocation has been closed, there is a dispute period after which the allocation is considered **finalized** and its query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **--allocation-claim-threshold**.
+
+- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed.

From b8cbf31781283f60f262ad5cdee030b018802577 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Mon, 17 Jan 2022 12:44:09 -0500
Subject: [PATCH 148/432] New translations what-is-hosted-service.mdx
 (Vietnamese)
---
 .../hosted-service/what-is-hosted-service.mdx | 77 +++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 pages/vi/hosted-service/what-is-hosted-service.mdx

diff --git a/pages/vi/hosted-service/what-is-hosted-service.mdx b/pages/vi/hosted-service/what-is-hosted-service.mdx
new file mode 100644
index 000000000000..7f604c8dc31a
--- /dev/null
+++ b/pages/vi/hosted-service/what-is-hosted-service.mdx
@@ -0,0 +1,77 @@
+---
+title: What is the Hosted Service?
+--- + +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) + +If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. + +## Create a Subgraph + +First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` + +### From an Existing Contract + +If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from this contract can be a good way to get started on the Hosted Service. + +You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). + +```sh +graph init \ + --product hosted-service + --from-contract \ + / [] +``` + +Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form. + +```sh +--network \ +--abi \ +``` + +The `` in this case is your github user or organization name, `` is the name for your subgraph, and `` is the optional name of the directory where graph init will put the example subgraph manifest. The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. **Both --network and --abi are optional.** + +### From an Example Subgraph + +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: + +``` +graph init --from-example --product hosted-service / [] +``` + +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. + +## Supported Networks on the Hosted Service + +Please note that the following networks are supported on the Hosted Service. 
Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) + +- `mainnet` +- `kovan` +- `rinkeby` +- `ropsten` +- `goerli` +- `poa-core` +- `poa-sokol` +- `xdai` +- `near-mainnet` +- `near-testnet` +- `matic` +- `mumbai` +- `fantom` +- `bsc` +- `chapel` +- `clover` +- `avalanche` +- `fuji` +- `celo` +- `celo-alfajores` +- `fuse` +- `moonriver` +- `mbase` +- `arbitrum-one` +- `arbitrum-rinkeby` +- `optimism` +- `optimism-kovan` +- `aurora` +- `aurora-testnet` From ee23f321547899b1c0deb25b7f6dd7af8dcc1375 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:10 -0500 Subject: [PATCH 149/432] New translations query-hosted-service.mdx (Vietnamese) --- .../hosted-service/query-hosted-service.mdx | 28 +++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 pages/vi/hosted-service/query-hosted-service.mdx diff --git a/pages/vi/hosted-service/query-hosted-service.mdx b/pages/vi/hosted-service/query-hosted-service.mdx new file mode 100644 index 000000000000..731e3a3120b2 --- /dev/null +++ b/pages/vi/hosted-service/query-hosted-service.mdx @@ -0,0 +1,28 @@ +--- +title: Query the Hosted Service +--- + +With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. + +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. + +#### Example + +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: + +```graphql +{ + counters { + id + value + } +} +``` + +## Using The Hosted Service + +The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service. + +Some of the main features are detailed below: + +![Explorer Playground](/img/explorer-playground.png) From 47121361c66601f7f379fb0e0eb3d7214c609bf2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:11 -0500 Subject: [PATCH 150/432] New translations migrating-subgraph.mdx (Vietnamese) --- .../vi/hosted-service/migrating-subgraph.mdx | 151 ++++++++++++++++++ 1 file changed, 151 insertions(+) create mode 100644 pages/vi/hosted-service/migrating-subgraph.mdx diff --git a/pages/vi/hosted-service/migrating-subgraph.mdx b/pages/vi/hosted-service/migrating-subgraph.mdx new file mode 100644 index 000000000000..85f72f053b30 --- /dev/null +++ b/pages/vi/hosted-service/migrating-subgraph.mdx @@ -0,0 +1,151 @@ +--- +title: Migrating an Existing Subgraph to The Graph Network +--- + +## Introduction + +This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. + +This will tell you everything you need to know about how to migrate to the decentralized network and manage your subgraphs moving forward. 
The process is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. + +### Migrating An Existing Subgraph to The Graph Network + +1. Get the latest version of the graph-cli installed: + +```sh +npm install -g @graphprotocol/graph-cli +``` + +```sh +yarn global add @graphprotocol/graph-cli +``` + +2. Create a subgraph on the [Subgraph Studio](https://thegraph.com/studio/). Guides on how to do that can be found in the [Subgraph Studio docs](/studio/subgraph-studio) and in [this video tutorial](https://www.youtube.com/watch?v=HfDgC2oNnwo). +3. Inside the main project subgraph repository, authenticate the subgraph to deploy and build on the studio: + +```sh +graph auth --studio +``` + +4. Generate files and build the subgraph: + +```sh +graph codegen && graph build +``` + +5. Deploy the subgraph to the Studio. You can find your `` in the Studio UI, which is based on the name of your subgraph. + +```sh + graph deploy --studio +``` + +6. Test queries on the Studio's playground. Here are some examples for the [Sushi - Mainnet Exchange Subgraph](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Playground): + +```sh +{ + users(first: 5) { + id + liquidityPositions { + id + } + } + bundles(first: 5) { + id + ethPrice + } +} +``` + +7. Fill in the description and the details of your subgraph and choose up to 3 categories. Upload a project image in the Studio if you'd like as well. +8. Publish the subgraph on The Graph's Network by hitting the "Publish" button. + +- Remember that publishing is an on-chain action and will require gas to be paid for in Ethereum - see an example transaction [here](https://etherscan.io/tx/0xd0c3fa0bc035703c9ba1ce40c1862559b9c5b6ea1198b3320871d535aa0de87b). Prices are roughly around 0.0425 ETH at 100 gwei. +- Any time you need to upgrade your subgraph, you will be charged an upgrade fee. Remember, upgrading is just publishing another version of your existing subgraph on-chain. Because this incurs a cost, it is highly recommended to deploy and test your subgraph on Rinkeby before deploying to mainnet. It can, in some cases, also require some GRT if there is no signal on that subgraph. In the case there is signal/curation on that subgraph version (using auto-migrate), the taxes will be split. + +And that's it! After you are done publishing, you'll be able to view your subgraphs live on the network via [The Graph Explorer](https://thegraph.com/explorer). + +### Upgrading a Subgraph on the Network + +If you would like to upgrade an existing subgraph on the network, you can do this by deploying a new version of your subgraph to the Subgraph Studio using the Graph CLI. + +1. Make changes to your current subgraph. A good idea is to test small fixes on the Subgraph Studio by publishing to Rinkeby. +2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): + +```sh +graph deploy --studio +``` + +3. Test the new version in the Subgraph Studio by querying in the playground +4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). + +### Owner Upgrade Fee: Deep Dive + +An upgrade requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every upgrade, a new bonding curve will be created (more on bonding curves [here](/curating#bonding-curve-101)). 
The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all of their curators' funds with recursive upgrade calls. The example below applies only if your subgraph is being actively curated on. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal yourself on your own subgraph.
+
+- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
+- Owner upgrades to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT are put into the new curve and 2,500 GRT are burned
+- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the upgrade, otherwise the upgrade will not succeed. This happens in the same transaction as the upgrade.
+
+_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of upgrades for subgraph developers._
+
+### Maintaining a Stable Version of a Subgraph
+
+If you're making a lot of changes to your subgraph, it is not a good idea to continually upgrade it and front the upgrade costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective, but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an upgrade so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/8tgJ7rKW) on Discord to let Indexers know when you're versioning your subgraphs.
+
+Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
+
+### Updating the Metadata of a Subgraph
+
+You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio, where all applicable fields can be edited.
+
+Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
+
+## Best Practices for Deploying a Subgraph to The Graph Network
+
+1. Leveraging an ENS name for Subgraph Development
+
+- Set up your ENS: [https://app.ens.domains/](https://app.ens.domains/)
+- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name).
+
+The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated.
+
+## Deprecating a Subgraph on The Graph Network
+
+Follow the steps [here](/developer/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
+
+## Querying a Subgraph + Billing on The Graph Network
+
+The Hosted Service was set up to allow developers to deploy their subgraphs without any restrictions.
+ +In order for The Graph Network to truly be decentralized, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/studio/billing). + +### Estimate Query Fees on the Network + +While this is not a live feature in the product UI, you can set your maximum budget per query by taking the amount you're willing to pay per month and divide it by your expected query volume. + +While you get to decide on your query budget, there is no guarantee that an Indexer will be willing to serve queries at that price. If a Gateway can match you to an Indexer willing to serve a query at, or lower than, the price you are willing to pay, you will pay the delta/difference of your budget **and** their price. As a consequence of that, a lower query price reduces the pool of Indexers available to you, which may affect the quality of service you receive. It's beneficial to have high query fees, as that may attract curation and big name Indexers to your subgraph. + +Remember that it's a dynamic and growing market, but how you interact with it is in your control. There is no maximum or minimum price specified in the protocol or in the Gateways. For example, you can look at the price paid by a few of the dapps on the network (on a per week basis), below. See the last column which shows query fees in GRT. For example, [Pickle Finance](https://www.pickle.finance/) has 8 requests per second and paid 2.4 GRT for one week. + +![QueryFee](/img/QueryFee.png) + +## Additional Resources + +If you're still confused, fear not! Check out the following resources or watch our video guide on migrating subgraphs to the decentralized network below: + +
+ +
+ +- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) +- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around + - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` +- [Subgraph Studio documentation](/studio/subgraph-studio) From c20cf3b16f924fc4de102ab60dee7f9d7773a683 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:12 -0500 Subject: [PATCH 151/432] New translations deploy-subgraph-hosted.mdx (Vietnamese) --- .../hosted-service/deploy-subgraph-hosted.mdx | 160 ++++++++++++++++++ 1 file changed, 160 insertions(+) create mode 100644 pages/vi/hosted-service/deploy-subgraph-hosted.mdx diff --git a/pages/vi/hosted-service/deploy-subgraph-hosted.mdx b/pages/vi/hosted-service/deploy-subgraph-hosted.mdx new file mode 100644 index 000000000000..bdc532e205e4 --- /dev/null +++ b/pages/vi/hosted-service/deploy-subgraph-hosted.mdx @@ -0,0 +1,160 @@ +--- +title: Deploy a Subgraph to the Hosted Service +--- + +If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. + +## Create a Hosted Service account + +Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. + +## Store the Access Token + +After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. + +## Create a Subgraph on the Hosted Service + +Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: + +**Image** - Select an image to be used as a preview image and thumbnail for the subgraph. + +**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ + +**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ + +**Subtitle** - Text that will appear in subgraph cards. + +**Description** - Description of the subgraph, visible on the subgraph details page. + +**GitHub URL** - Link to the subgraph repository on GitHub. + +**Hide** - Switching this on hides the subgraph in the Graph Explorer. + +After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. 
The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). + +## Deploy a Subgraph on the Hosted Service + +Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files. + +You deploy the subgraph by running `yarn deploy` + +After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. + +## Redeploying a Subgraph + +When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. + +If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. + +### Deploying the subgraph to multiple Ethereum networks + +In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "ropsten", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:ropsten": "mustache config/ropsten.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Ropsten you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Ropsten: +yarn prepare:ropsten && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. + +## Subgraph archive policy + +The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. + +To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. + +**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.** + +Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. 
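+
+As a convenience, the health check described above can also be scripted instead of run in a browser. The sketch below posts a trimmed-down version of the status query to the Hosted Service index-node endpoint with `curl`; replace `org/subgraph` with your own subgraph name.
+
+```sh
+curl -s https://api.thegraph.com/index-node/graphql \
+  -H 'Content-Type: application/json' \
+  -d '{"query":"{ indexingStatusForCurrentVersion(subgraphName: \"org/subgraph\") { synced health chains { chainHeadBlock { number } latestBlock { number } } } }"}'
+```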
From 948f781772bd5658eb7b75c29e041f928e6e4aa3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:13 -0500 Subject: [PATCH 152/432] New translations explorer.mdx (Vietnamese) --- pages/vi/explorer.mdx | 211 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 211 insertions(+) create mode 100644 pages/vi/explorer.mdx diff --git a/pages/vi/explorer.mdx b/pages/vi/explorer.mdx new file mode 100644 index 000000000000..a7b8c5204177 --- /dev/null +++ b/pages/vi/explorer.mdx @@ -0,0 +1,211 @@ +--- +title: The Graph Explorer +--- + +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): + +
+ +
+ +## Subgraphs + +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. + +![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) + +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. + +![Explorer Image 2](/img/Subgraph-Details.png) + +On each subgraph’s dedicated page, several details are surfaced. These include: + +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph + +![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) + +## Participants + +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. + +### 1. Indexers + +![Explorer Image 4](/img/Indexer-Pane.png) + +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. 
+- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. + +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. + +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) + +![Indexing details pane](/img/Indexing-Details-Pane.png) + +### 2. Curators + +Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: + +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns + +![Explorer Image 6](/img/Curation-Overview.png) + +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) + +### 3. Delegators + +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. + +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! + +![Explorer Image 7](/img/Delegation-Overview.png) + +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: + +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at + +If you want to learn more about how to become a Delegator, look no further! 
All you have to do is head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators).
+
+## Network
+
+In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
+
+### Activity
+
+The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like:
+
+- The current total network stake
+- The stake split between the Indexers and their Delegators
+- Total supply, minted, and burned GRT since the network inception
+- Total Indexing rewards since the inception of the protocol
+- Protocol parameters such as curation reward, inflation rate, and more
+- Current epoch rewards and fees
+
+A few key details that are worth mentioning:
+
+- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (i.e. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+
+![Explorer Image 8](/img/Network-Stats.png)
+
+### Epochs
+
+In the Epochs section, you can analyze per-epoch metrics such as:
+
+- Epoch start or end block
+- Query fees generated and indexing rewards collected during a specific epoch
+- Epoch status, which refers to the query fee collection and distribution and can have different states:
+  - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees
+  - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them.
+  - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
+  - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized.
+
+![Explorer Image 9](/img/Epoch-Stats.png)
+
+## Your User Profile
+
+Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see:
+
+### Profile Overview
+
+This is where you can see any recent actions you have taken. This is also where you can find your profile information, description, and website (if you added one).
+
+![Explorer Image 10](/img/Profile-Overview.png)
+
+### Subgraphs Tab
+
+If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network.
+ +![Explorer Image 11](/img/Subgraphs-Overview.png) + +### Indexing Tab + +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. + +This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: + +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior + +![Explorer Image 12](/img/Indexer-Stats.png) + +### Delegating Tab + +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. + +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. + +The Delegator metrics you’ll see here in this tab include: + +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards + +In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). + +With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. + +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). + +![Explorer Image 13](/img/Delegation-Stats.png) + +### Curating Tab + +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. + +Within this tab, you’ll find an overview of: + +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph +- Updated at date details + +![Explorer Image 14](/img/Curation-Stats.png) + +## Your Profile Settings + +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. + +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. 
+ +![Explorer Image 15](/img/Profile-Settings.png) + +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. + +
![Wallet details](/img/Wallet-Details.png)
From 5a0cc77cd0883ae1262d56a0ae821b367b194a90 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:14 -0500 Subject: [PATCH 153/432] New translations quick-start.mdx (Vietnamese) --- pages/vi/developer/quick-start.mdx | 227 +++++++++++++++++++++++++++++ 1 file changed, 227 insertions(+) create mode 100644 pages/vi/developer/quick-start.mdx diff --git a/pages/vi/developer/quick-start.mdx b/pages/vi/developer/quick-start.mdx new file mode 100644 index 000000000000..6893d424ddc2 --- /dev/null +++ b/pages/vi/developer/quick-start.mdx @@ -0,0 +1,227 @@ +--- +title: Quick Start +--- + +This guide will quickly take you through how to initialize, create, and deploy your subgraph on: + +- **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** +- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) + +## Subgraph Studio + +### 1. Install the Graph CLI + +The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. + +```sh +# NPM +$ npm install -g @graphprotocol/graph-cli + +# Yarn +$ yarn global add @graphprotocol/graph-cli +``` + +### 2. Initialize your Subgraph + +- Initialize your subgraph from an existing contract. + +```sh +graph init --studio +``` + +- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. + +![Subgraph command](/img/Subgraph-Slug.png) + +### 3. Write your Subgraph + +The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: + +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. + +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). + +### 4. Deploy to the Subgraph Studio + +- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. +- Click "Create" and enter the subgraph slug you used in step 2. +- Run these commands in the subgraph folder + +```sh +$ graph codegen +$ graph build +``` + +- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. + +```sh +$ graph auth --studio +$ graph deploy --studio +``` + +- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` + +### 5. Check your logs + +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). 
The query below will tell you when a subgraph fails so you can debug accordingly: + +```sh +{ + indexingStatuses(subgraphs: ["Qm..."]) { + node + synced + health + fatalError { + message + block { + number + hash + } + handler + } + nonFatalErrors { + message + block { + number + hash + } + handler + } + chains { + network + chainHeadBlock { + number + } + earliestBlock { + number + } + latestBlock { + number + } + lastHealthyBlock { + number + } + } + entityCount + } +} +``` + +### 6. Query your Subgraph + +You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). + +## Hosted Service + +### 1. Install the Graph CLI + +"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. + +```sh +# NPM +$ npm install -g @graphprotocol/graph-cli + +# Yarn +$ yarn global add @graphprotocol/graph-cli +``` + +### 2. Initialize your Subgraph + +- Initialize your subgraph from an existing contract. + +```sh +$ graph init --product hosted-service --from-contract
+``` + +- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` + +- If you'd like to initialize from an example, run the command below: + +```sh +$ graph init --product hosted-service --from-example +``` + +- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +### 3. Write your Subgraph + +The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: + +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index +- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema + +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). + +### 4. Deploy your Subgraph + +- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account +- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. +- Run codegen in the subgraph folder + +```sh + # NPM +$ npm run codegen + +# Yarn +$ yarn codegen +``` + +- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. + +```sh +$ graph auth --product hosted-service +$ graph deploy --product hosted-service / +``` + +### 5. Check your logs + +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: + +```sh +{ + indexingStatuses(subgraphs: ["Qm..."]) { + node + synced + health + fatalError { + message + block { + number + hash + } + handler + } + nonFatalErrors { + message + block { + number + hash + } + handler + } + chains { + network + chainHeadBlock { + number + } + earliestBlock { + number + } + latestBlock { + number + } + lastHealthyBlock { + number + } + } + entityCount + } +} +``` + +### 6. Query your Subgraph + +Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. 
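+
+If you just want to sanity-check the deployment from a script, a minimal sketch in JavaScript might look like the following. The endpoint pattern and the `exampleEntities` field are placeholders, not part of the instructions above - substitute your own `GITHUB_USER/SUBGRAPH_NAME` and a field that exists in your schema:
+
+```javascript
+// Minimal sketch, assuming a browser or Node 18+ environment with `fetch` available.
+// Replace the subgraph name and the query with your own values.
+const endpoint = 'https://api.thegraph.com/subgraphs/name/graphprotocol/examplesubgraph'
+
+const query = `
+{
+  exampleEntities(first: 5) {
+    id
+  }
+}`
+
+async function fetchSubgraphData() {
+  const response = await fetch(endpoint, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ query }),
+  })
+  const { data, errors } = await response.json()
+  if (errors) {
+    // A failing subgraph or a bad query will surface here.
+    console.error(errors)
+    return
+  }
+  console.log(data)
+}
+
+fetchSubgraphData()
+```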
From 42cbf4990865b8c867a1a822f36c71e63d5862d4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:16 -0500 Subject: [PATCH 154/432] New translations query-the-graph.mdx (Vietnamese) --- pages/vi/developer/query-the-graph.mdx | 32 ++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) create mode 100644 pages/vi/developer/query-the-graph.mdx diff --git a/pages/vi/developer/query-the-graph.mdx b/pages/vi/developer/query-the-graph.mdx new file mode 100644 index 000000000000..ae480b1e6883 --- /dev/null +++ b/pages/vi/developer/query-the-graph.mdx @@ -0,0 +1,32 @@ +--- +title: Query The Graph +--- + +With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. + +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. + +#### Example + +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: + +```graphql +{ + counters { + id + value + } +} +``` + +## Using The Graph Explorer + +Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. + +![Query Subgraph Pane](/img/query-subgraph-pane.png) + +As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). + +Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). + +You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. From 9162362b46aaaa15120bb11d99dfe1ab9f640047 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:17 -0500 Subject: [PATCH 155/432] New translations network.mdx (Vietnamese) --- pages/vi/about/network.mdx | 15 +++++++++++++++ 1 file changed, 15 insertions(+) create mode 100644 pages/vi/about/network.mdx diff --git a/pages/vi/about/network.mdx b/pages/vi/about/network.mdx new file mode 100644 index 000000000000..b19f08d12bc7 --- /dev/null +++ b/pages/vi/about/network.mdx @@ -0,0 +1,15 @@ +--- +title: Network Overview +--- + +The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) + +## Overview + +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. 
+ +![Token Economics](/img/Network-roles@2x.png) + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. From c36d61c97672a632a71b038b20aabf6f8e9c012e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:18 -0500 Subject: [PATCH 156/432] New translations publish-subgraph.mdx (Vietnamese) --- pages/vi/developer/publish-subgraph.mdx | 27 +++++++++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 pages/vi/developer/publish-subgraph.mdx diff --git a/pages/vi/developer/publish-subgraph.mdx b/pages/vi/developer/publish-subgraph.mdx new file mode 100644 index 000000000000..2f35f5eb1bae --- /dev/null +++ b/pages/vi/developer/publish-subgraph.mdx @@ -0,0 +1,27 @@ +--- +title: Publish a Subgraph to the Decentralized Network +--- + +Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. + +Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. + +For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). + +### Networks + +The decentralized network currently supports both Rinkeby and Ethereum Mainnet. + +### Publishing a subgraph + +Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). + +- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. + +- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. + +- When publishing a new version for an existing subgraph the same rules apply as above. + +### Updating metadata for a published subgraph + +Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. 
From fd84e2e372046f6844cbdeda56ec51dfd3000819 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:19 -0500 Subject: [PATCH 157/432] New translations matchstick.mdx (Vietnamese) --- pages/vi/developer/matchstick.mdx | 267 ++++++++++++++++++++++++++++++ 1 file changed, 267 insertions(+) create mode 100644 pages/vi/developer/matchstick.mdx diff --git a/pages/vi/developer/matchstick.mdx b/pages/vi/developer/matchstick.mdx new file mode 100644 index 000000000000..3cf1ec761bb9 --- /dev/null +++ b/pages/vi/developer/matchstick.mdx @@ -0,0 +1,267 @@ +--- +title: Unit Testing Framework +--- + +Matchstick is a unit testing framework, developed by [LimeChain](https://limechain.tech/), that enables subgraph developers to test their mapping logic in a sandboxed environment and deploy their subgraphs with confidence! + +Follow the [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) to install. Now, you can move on to writing your first unit test. + +## Write a Unit Test + +Let's see how a simple unit test would look like, using the Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). + +Assuming we have the following handler function (along with two helper functions to make our life easier): + +```javascript +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleNewGravatars(events: NewGravatar[]): void { + events.forEach((event) => { + handleNewGravatar(event) + }) +} + +export function createNewGravatarEvent( + id: i32, + ownerAddress: string, + displayName: string, + imageUrl: string +): NewGravatar { + let mockEvent = newMockEvent() + let newGravatarEvent = new NewGravatar( + mockEvent.address, + mockEvent.logIndex, + mockEvent.transactionLogIndex, + mockEvent.logType, + mockEvent.block, + mockEvent.transaction, + mockEvent.parameters + ) + newGravatarEvent.parameters = new Array() + let idParam = new ethereum.EventParam('id', ethereum.Value.fromI32(id)) + let addressParam = new ethereum.EventParam( + 'ownderAddress', + ethereum.Value.fromAddress(Address.fromString(ownerAddress)) + ) + let displayNameParam = new ethereum.EventParam('displayName', ethereum.Value.fromString(displayName)) + let imageUrlParam = new ethereum.EventParam('imageUrl', ethereum.Value.fromString(imageUrl)) + + newGravatarEvent.parameters.push(idParam) + newGravatarEvent.parameters.push(addressParam) + newGravatarEvent.parameters.push(displayNameParam) + newGravatarEvent.parameters.push(imageUrlParam) + + return newGravatarEvent +} +``` + +We first have to create a test file in our project. We have chosen the name `gravity.test.ts`. In the newly created file we need to define a function named `runTests()`. It is important that the function has that exact name. 
This is an example of what our tests might look like:
+
+```typescript
+import { clearStore, test, assert } from 'matchstick-as/assembly/index'
+import { Gravatar } from '../../generated/schema'
+import { NewGravatar } from '../../generated/Gravity/Gravity'
+import { createNewGravatarEvent, handleNewGravatars } from '../mappings/gravity'
+
+export function runTests(): void {
+  test('Can call mappings with custom events', () => {
+    // Initialise
+    let gravatar = new Gravatar('gravatarId0')
+    gravatar.save()
+
+    // Call mappings
+    let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+
+    let anotherGravatarEvent = createNewGravatarEvent(3546, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+
+    handleNewGravatars([newGravatarEvent, anotherGravatarEvent])
+
+    assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0')
+    assert.fieldEquals('Gravatar', '12345', 'owner', '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
+    assert.fieldEquals('Gravatar', '3546', 'displayName', 'cap')
+
+    clearStore()
+  })
+
+  test('Next test', () => {
+    //...
+  })
+}
+```
+
+That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks. The rest of it is pretty straightforward - here's what happens:
+
+- We're setting up our initial state and adding one custom Gravatar entity;
+- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function;
+- We're calling our handler methods for those events - `handleNewGravatars()` - and passing in the list of our custom events;
+- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that get added when the handler function is called;
+- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want.
+
+There we go - we've created our first test! 👏
+
+❗ **IMPORTANT:** _In order for the tests to work, we need to export the `runTests()` function in our mappings file. It won't be used there, but the export statement has to be there so that it can get picked up by Rust later when running the tests._
+
+You can export the tests wrapper function in your mappings file like this:
+
+```
+export { runTests } from "../tests/gravity.test.ts";
+```
+
+❗ **IMPORTANT:** _Currently there's an issue with using Matchstick when deploying your subgraph. Please only use Matchstick for local testing, and remove/comment out this line (`export { runTests } from "../tests/gravity.test.ts"`) once you're done. We expect to resolve this issue shortly, sorry for the inconvenience!_
+
+_If you don't remove that line, you will get the following error message when attempting to deploy your subgraph:_
+
+```
+/...
+Mapping terminated before handling trigger: oneshot canceled +.../ +``` + +Now in order to run our tests you simply need to run the following in your subgraph root folder: + +`graph test Gravity` + +And if all goes well you should be greeted with the following: + +![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) + +## Common test scenarios + +### Hydrating the store with a certain state + +Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity: + +```typescript +let gravatar = new Gravatar('entryId') +gravatar.save() +``` + +### Calling a mapping function with an event + +A user can create a custom event and pass it to a mapping function that is bound to the store: + +```typescript +import { store } from 'matchstick-as/assembly/store' +import { NewGravatar } from '../../generated/Gravity/Gravity' +import { handleNewGravatars, createNewGravatarEvent } from './mapping' + +let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') + +handleNewGravatar(newGravatarEvent) +``` + +### Calling all of the mappings with event fixtures + +Users can call the mappings with test fixtures. + +```typescript +import { NewGravatar } from '../../generated/Gravity/Gravity' +import { store } from 'matchstick-as/assembly/store' +import { handleNewGravatars, createNewGravatarEvent } from './mapping' + +let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') + +let anotherGravatarEvent = createNewGravatarEvent(3546, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') + +handleNewGravatars([newGravatarEvent, anotherGravatarEvent]) +``` + +``` +export function handleNewGravatars(events: NewGravatar[]): void { + events.forEach(event => { + handleNewGravatar(event); + }); +} +``` + +### Mocking contract calls + +Users can mock contract calls: + +```typescript +import { addMetadata, assert, createMockedFunction, clearStore, test } from 'matchstick-as/assembly/index' +import { Gravity } from '../../generated/Gravity/Gravity' +import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' + +let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7') +let expectedResult = Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947') +let bigIntParam = BigInt.fromString('1234') +createMockedFunction(contractAddress, 'gravatarToOwner', 'gravatarToOwner(uint256):(address)') + .withArgs([ethereum.Value.fromSignedBigInt(bigIntParam)]) + .returns([ethereum.Value.fromAddress(Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947'))]) + +let gravity = Gravity.bind(contractAddress) +let result = gravity.gravatarToOwner(bigIntParam) + +assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result)) +``` + +As demonstrated, in order to mock a contract call and hardcore a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value. 
+ +Users can also mock function reverts: + +```typescript +let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7') +createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(string,string)') + .withArgs([ethereum.Value.fromAddress(contractAddress)]) + .reverts() +``` + +### Asserting the state of the store + +Users are able to assert the final (or midway) state of the store through asserting entities. In order to do this, the user has to supply an Entity type, the specific ID of an Entity, a name of a field on that Entity, and the expected value of the field. Here's a quick example: + +```typescript +import { assert } from 'matchstick-as/assembly/index' +import { Gravatar } from '../generated/schema' + +let gravatar = new Gravatar('gravatarId0') +gravatar.save() + +assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') +``` + +Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully. + +### Interacting with Event metadata + +Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function. The following example shows how you can read/write to those fields on the Event object: + +```typescript +// Read +let logType = newGravatarEvent.logType + +// Write +let UPDATED_ADDRESS = '0xB16081F360e3847006dB660bae1c6d1b2e17eC2A' +newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) +``` + +### Asserting variable equality + +```typescript +assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); +``` + +### Asserting that an Entity is **not** in the store + +Users can assert that an entity does not exist in the store. The function takes an entity type and an id. If the entity is in fact in the store, the test will fail with a relevant error message. Here's a quick example of how to use this functionality: + +```typescript +assert.notInStore('Gravatar', '23') +``` + +### Test run time duration in the log output + +The log output includes the test run duration. Here's an example: + +`Jul 09 14:54:42.420 INFO Program execution time: 10.06022ms` + +## Feedback + +If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing. From 75d2f14c77567f90b8fe741e03a03996f784ce30 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:20 -0500 Subject: [PATCH 158/432] New translations graphql-api.mdx (Vietnamese) --- pages/vi/developer/graphql-api.mdx | 267 +++++++++++++++++++++++++++++ 1 file changed, 267 insertions(+) create mode 100644 pages/vi/developer/graphql-api.mdx diff --git a/pages/vi/developer/graphql-api.mdx b/pages/vi/developer/graphql-api.mdx new file mode 100644 index 000000000000..f9cb6214fcd9 --- /dev/null +++ b/pages/vi/developer/graphql-api.mdx @@ -0,0 +1,267 @@ +--- +title: GraphQL API +--- + +This guide explains the GraphQL Query API that is used for the Graph Protocol. + +## Queries + +In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. 
+ +#### Examples + +Query for a single `Token` entity defined in your schema: + +```graphql +{ + token(id: "1") { + id + owner + } +} +``` + +**Note:** When querying for a single entity, the `id` field is required and it must be a string. + +Query all `Token` entities: + +```graphql +{ + tokens { + id + owner + } +} +``` + +### Sorting + +When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. + +#### Example + +```graphql +{ + tokens(orderBy: price, orderDirection: asc) { + id + owner + } +} +``` + +### Pagination + +When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. + +Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. + +Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. + +#### Example + +Query the first 10 tokens: + +```graphql +{ + tokens(first: 10) { + id + owner + } +} +``` + +To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. + +#### Example + +Query 10 `Token` entities, offset by 10 places from the beginning of the collection: + +```graphql +{ + tokens(first: 10, skip: 10) { + id + owner + } +} +``` + +#### Example + +If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: + +```graphql +{ + query manyTokens($lastID: String) { + tokens(first: 1000, where: { id_gt: $lastID }) { + id + owner + } + } +} +``` + +The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. + +### Filtering + +You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. + +#### Example + +Query challenges with `failed` outcome: + +```graphql +{ + challenges(where: { outcome: "failed" }) { + challenger + outcome + application { + id + } + } +} +``` + +You can use suffixes like `_gt`, `_lte` for value comparison: + +#### Example + +```graphql +{ + applications(where: { deposit_gt: "10000000000" }) { + id + whitelisted + deposit + } +} +``` + +Full list of parameter suffixes: + +```graphql +_not +_gt +_lt +_gte +_lte +_in +_not_in +_contains +_not_contains +_starts_with +_ends_with +_not_starts_with +_not_ends_with +``` + +Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. + +### Time-travel queries + +You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. 
The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the top-level fields of queries.
+
+The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+
+Note that the current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
+
+#### Example
+
+```graphql
+{
+  challenges(block: { number: 8000000 }) {
+    challenger
+    outcome
+    application {
+      id
+    }
+  }
+}
+```
+
+This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
+
+#### Example
+
+```graphql
+{
+  challenges(block: { hash: "0x5a0b54d5dc17e0aadc383d2db43b0a0d3e029c4c" }) {
+    challenger
+    outcome
+    application {
+      id
+    }
+  }
+}
+```
+
+This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
+
+### Fulltext Search Queries
+
+Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+
+Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+
+Fulltext search operators:
+
+| Symbol | Operator    | Description                                                                                                                           |
+| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `&`    | `And`       | For combining multiple search terms into a filter for entities that include all of the provided terms                                 |
+| `\|\|` | `Or`        | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms  |
+| `<->`  | `Follow by` | Specify the distance between two words.                                                                                                |
+| `:*`   | `Prefix`    | Use the prefix search term to find words whose prefix match (2 characters required.)                                                  |
+
+#### Examples
+
+Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+
+```graphql
+{
+  blogSearch(text: "anarchism | crumpets") {
+    id
+    title
+    body
+    author
+  }
+}
+```
+
+The `follow by` operator specifies words a specific distance apart in the fulltext documents.
The following query will return all blogs with variations of "decentralize" followed by "philosophy":
+
+```graphql
+{
+  blogSearch(text: "decentralized <-> philosophy") {
+    id
+    title
+    body
+    author
+  }
+}
+```
+
+Combine fulltext operators to make more complex filters. With a prefix search operator combined with a follow by operator, this example query will match all blog entities with words that start with "lou" followed by "music".
+
+```graphql
+{
+  blogSearch(text: "lou:* <-> music") {
+    id
+    title
+    body
+    author
+  }
+}
+```
+
+## Schema
+
+The schema of your data source--that is, the entity types, values, and relationships that are available to query--is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest.
+
+> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+
+### Entities
+
+All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+
+> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.

From 087607592f6b0bd1907efe0cf9c91b0c8509b514 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Mon, 17 Jan 2022 12:44:21 -0500
Subject: [PATCH 159/432] New translations distributed-systems.mdx (Vietnamese)

---
 pages/vi/developer/distributed-systems.mdx | 132 +++++++++++++++++++++
 1 file changed, 132 insertions(+)
 create mode 100644 pages/vi/developer/distributed-systems.mdx

diff --git a/pages/vi/developer/distributed-systems.mdx b/pages/vi/developer/distributed-systems.mdx
new file mode 100644
index 000000000000..894fcbe2e18b
--- /dev/null
+++ b/pages/vi/developer/distributed-systems.mdx
@@ -0,0 +1,132 @@
+---
+title: Distributed Systems
+---
+
+The Graph is a protocol implemented as a distributed system.
+
+Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale.
+
+Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org.
+
+1. Indexer ingests block 8
+2. Request served to the client for block 8
+3. Indexer ingests block 9
+4. Indexer ingests block 10A
+5. Request served to the client for block 10A
+6. Indexer detects reorg to 10B and rolls back 10A
+7. Request served to the client for block 9
+8. Indexer ingests block 10B
+9. Indexer ingests block 11
+10. Request served to the client for block 11
+
+From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time.
+
+From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order.
We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. + +It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. + +Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. + +## Polling for updated data + +The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. + +We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: + +```javascript +/// Updates the protocol.paused variable to the latest +/// known value in a loop by fetching it using The Graph. +async function updateProtocolPaused() { + // It's ok to start with minBlock at 0. The query will be served + // using the latest block available. Setting minBlock to 0 is the + // same as leaving out that argument. + let minBlock = 0 + + for (;;) { + // Schedule a promise that will be ready once + // the next Ethereum block will likely be available. + const nextBlock = new Promise((f) => { + setTimeout(f, 14000) + }) + + const query = ` + { + protocol(block: { number_gte: ${minBlock} } id: "0") { + paused + } + _meta { + block { + number + } + } + }` + + const response = await graphql(query) + minBlock = response._meta.block.number + + // TODO: Do something with the response data here instead of logging it. + console.log(response.protocol.paused) + + // Sleep to wait for the next block + await nextBlock + } +} +``` + +## Fetching a set of related items + +Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. + +Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. + +```javascript +/// Gets a list of domain names from a single block using pagination +async function getDomainNames() { + // Set a cap on the maximum number of items to pull. + let pages = 5 + const perPage = 1000 + + // The first query will get the first page of results and also get the block + // hash so that the remainder of the queries are consistent with the first. 
+ let query = ` + { + domains(first: ${perPage}) { + name + id + } + _meta { + block { + hash + } + } + }` + + let data = await graphql(query) + let result = data.domains.map((d) => d.name) + let blockHash = data._meta.block.hash + + // Continue fetching additional pages until either we run into the limit of + // 5 pages total (specified above) or we know we have reached the last page + // because the page has fewer entities than a full page. + while (data.domains.length == perPage && --pages) { + let lastID = data.domains[data.domains.length - 1].id + query = ` + { + domains(first: ${perPage}, where: { id_gt: "${lastID}" }, block: { hash: "${blockHash}" }) { + name + id + } + }` + + data = await graphql(query) + + // Accumulate domain names into the result + for (domain of data.domains) { + result.push(domain.name) + } + } + return result +} +``` + +Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. From 61cab09ef671d2db12d4500785c5fe0428da4757 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:23 -0500 Subject: [PATCH 160/432] New translations developer-faq.mdx (Vietnamese) --- pages/vi/developer/developer-faq.mdx | 172 +++++++++++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 pages/vi/developer/developer-faq.mdx diff --git a/pages/vi/developer/developer-faq.mdx b/pages/vi/developer/developer-faq.mdx new file mode 100644 index 000000000000..41449c60e5ab --- /dev/null +++ b/pages/vi/developer/developer-faq.mdx @@ -0,0 +1,172 @@ +--- +title: Developer FAQs +--- + +### 1. Can I delete my subgraph? + +It is not possible to delete subgraphs once they are created. + +### 2. Can I change my subgraph name? + +No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. + +### 3. Can I change the GitHub account associated with my subgraph? + +No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. + +### 4. Am I still able to create a subgraph if my smart contracts don't have events? + +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. + +If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. + +### 5. Is it possible to deploy one subgraph with the same name for multiple networks? + +You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) + +### 6. How are templates different from data sources? + +Templates allow you to create data sources on the fly, while your subgraph is indexing. 
It might be the case that your contract will spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) up front, you can define how you want to index them in a template; when they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+
+Check out the "Instantiating a data source template" section in [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates).
+
+### 7. How do I make sure I'm using the latest version of graph-node for my local deployments?
+
+You can run the following command:
+
+```sh
+docker pull graphprotocol/graph-node:latest
+```
+
+**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node.
+
+### 8. How do I call a contract function or access a public state variable from my subgraph mappings?
+
+Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api).
+
+### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`?
+
+Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually.
+
+### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories?
+
+- [graph-node](https://github.com/graphprotocol/graph-node)
+- [graph-cli](https://github.com/graphprotocol/graph-cli)
+- [graph-ts](https://github.com/graphprotocol/graph-ts)
+
+### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events?
+
+If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
+
+### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events?
+
+Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
+
+### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers?
+
+Yes. You can do this by importing `graph-ts` as per the example below:
+
+```javascript
+import { dataSource } from '@graphprotocol/graph-ts'
+
+dataSource.network()
+dataSource.address()
+```
+
+### 14. Do you support block and call handlers on Rinkeby?
+
+On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being.
+
+### 15. Can I import ethers.js or other JS libraries into my subgraph mappings?
+
+Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.
+
+### 16. Is it possible to specify what block to start indexing from?
+
+Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: [Start blocks](/developer/create-subgraph-hosted#start-blocks)
+
+### 17.
Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. + +Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) + +### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? + +Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: + +```sh +curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql +``` + +### 19. What networks are supported by The Graph? + +The graph-node supports any EVM-compatible JSON RPC API chain. + +The Graph Network supports subgraphs indexing mainnet Ethereum: + +- `mainnet` + +In the Hosted Service, the following networks are supported: + +- Ethereum mainnet +- Kovan +- Rinkeby +- Ropsten +- Goerli +- PoA-Core +- PoA-Sokol +- xDAI +- NEAR +- NEAR testnet +- Matic +- Mumbai +- Fantom +- Binance Smart Chain +- Clover +- Avalanche +- Fuji +- Celo +- Celo-Alfajores +- Fuse +- Moonbeam +- Arbitrum One +- Arbitrum Testnet (on Rinkeby) +- Optimism +- Optimism Testnet (on Kovan) + +There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). + +### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? + +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 21. Is this possible to use Apollo Federation on top of graph-node? + +Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. + +### 22. Is there a limit to how many objects The Graph can return per query? + +By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: + +```graphql +someCollection(first: 1000, skip: ) { ... } +``` + +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? + +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. + +### 24. Where do I go to find my current subgraph on the Hosted Service? + +Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) + +### 25. Will the Hosted Service start charging query fees? + +The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. 
Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. + +### 26. When will the Hosted Service be shut down? + +If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service. + +### 27. How do I upgrade a subgraph on mainnet? + +If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. From d3246ab8f2b1f02d5b10a646f8278e6adc1814ee Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:24 -0500 Subject: [PATCH 161/432] New translations deprecating-a-subgraph.mdx (Vietnamese) --- pages/vi/developer/deprecating-a-subgraph.mdx | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) create mode 100644 pages/vi/developer/deprecating-a-subgraph.mdx diff --git a/pages/vi/developer/deprecating-a-subgraph.mdx b/pages/vi/developer/deprecating-a-subgraph.mdx new file mode 100644 index 000000000000..f8966e025c13 --- /dev/null +++ b/pages/vi/developer/deprecating-a-subgraph.mdx @@ -0,0 +1,17 @@ +--- +title: Deprecating a Subgraph +--- + +So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: + +1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) +2. Call 'deprecateSubgraph' with your own address as the first parameter +3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. +4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` +5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: + +- Curators will not be able to signal on the subgraph anymore +- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price +- Deprecated subgraphs will be indicated with an error message. + +If you interacted with the now deprecated subgraph, you'll be able to find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab respectively. 
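+
+If you prefer to make this call from a script rather than through Etherscan, a minimal sketch with ethers.js (v5) might look like the following. The ABI fragment is inferred from the parameters described above and is an assumption - verify it against the contract on Etherscan before sending a transaction:
+
+```javascript
+import { ethers } from 'ethers'
+
+// GNS proxy contract from the Etherscan link above.
+const GNS_ADDRESS = '0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825'
+
+// Human-readable ABI fragment based on the parameters described in the steps above (assumed, please verify).
+const GNS_ABI = ['function deprecateSubgraph(address graphAccount, uint256 subgraphNumber)']
+
+async function deprecateMySubgraph(subgraphNumber) {
+  // Assumes a browser wallet (e.g. MetaMask) injected as window.ethereum,
+  // connected to the account that owns the subgraph.
+  const provider = new ethers.providers.Web3Provider(window.ethereum)
+  await provider.send('eth_requestAccounts', [])
+  const signer = provider.getSigner()
+  const owner = await signer.getAddress()
+
+  const gns = new ethers.Contract(GNS_ADDRESS, GNS_ABI, signer)
+
+  // First parameter: your own address; second: 0 for your first published subgraph, 1 for your second, etc.
+  const tx = await gns.deprecateSubgraph(owner, subgraphNumber)
+  await tx.wait()
+}
+```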
From 7bc6a100b1554364c34e3779571e298448422dab Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:25 -0500 Subject: [PATCH 162/432] New translations define-subgraph-hosted.mdx (Vietnamese) --- pages/vi/developer/define-subgraph-hosted.mdx | 35 +++++++++++++++++++ 1 file changed, 35 insertions(+) create mode 100644 pages/vi/developer/define-subgraph-hosted.mdx diff --git a/pages/vi/developer/define-subgraph-hosted.mdx b/pages/vi/developer/define-subgraph-hosted.mdx new file mode 100644 index 000000000000..92bf5bd8cd2f --- /dev/null +++ b/pages/vi/developer/define-subgraph-hosted.mdx @@ -0,0 +1,35 @@ +--- +title: Define a Subgraph +--- + +A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. + +![Define a Subgraph](/img/define-subgraph.png) + +The subgraph definition consists of a few files: + +- `subgraph.yaml`: a YAML file containing the subgraph manifest + +- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL + +- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) + +Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. + +## Install the Graph CLI + +The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. + +Once you have `yarn`, install the Graph CLI by running + +**Install with yarn:** + +```bash +yarn global add @graphprotocol/graph-cli +``` + +**Install with npm:** + +```bash +npm install -g @graphprotocol/graph-cli +``` From 35fb120dd61eb552da6d980de577886eacd79a1e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:26 -0500 Subject: [PATCH 163/432] New translations create-subgraph-hosted.mdx (Vietnamese) --- pages/vi/developer/create-subgraph-hosted.mdx | 928 ++++++++++++++++++ 1 file changed, 928 insertions(+) create mode 100644 pages/vi/developer/create-subgraph-hosted.mdx diff --git a/pages/vi/developer/create-subgraph-hosted.mdx b/pages/vi/developer/create-subgraph-hosted.mdx new file mode 100644 index 000000000000..43c18e98693c --- /dev/null +++ b/pages/vi/developer/create-subgraph-hosted.mdx @@ -0,0 +1,928 @@ +--- +title: Create a Subgraph +--- + +Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. + +The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. 
+ +## Mạng lưới được hỗ trợ + +The Graph Network supports subgraphs indexing mainnet Ethereum: + +- `mainnet` + +**Additional Networks are supported in beta on the Hosted Service**: + +- `mainnet` +- `kovan` +- `rinkeby` +- `ropsten` +- `goerli` +- `poa-core` +- `poa-sokol` +- `xdai` +- `near-mainnet` +- `near-testnet` +- `matic` +- `mumbai` +- `fantom` +- `bsc` +- `chapel` +- `clover` +- `avalanche` +- `fuji` +- `celo` +- `celo-alfajores` +- `fuse` +- `moonriver` +- `mbase` +- `arbitrum-one` +- `arbitrum-rinkeby` +- `optimism` +- `optimism-kovan` +- `aurora` +- `aurora-testnet` + +The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. + +Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). + +## From An Existing Contract + +The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. + +```sh +graph init \ + --product subgraph-studio + --from-contract \ + [--network ] \ + [--abi ] \ + [] +``` + +The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. + +## From An Example Subgraph + +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: + +``` +graph init --studio +``` + +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. + +## The Subgraph Manifest + +The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph, `subgraph.yaml` is: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/example-subgraph +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - function: handleBlock + - function: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts +``` + +The important entries to update for the manifest are: + +- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. + +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. + +- `features`: a list of all used [feature](#experimental-features) names. + +- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. + +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. + +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. + +- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. + +- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. + +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. + +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. + +The triggers for a data source within a block are ordered using the following process: + +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers with in the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. 
Block triggers are run after event and call triggers, in the order they are defined in the manifest. + +These ordering rules are subject to change. + +### Getting The ABIs + +The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: + +- If you are building your own project, you will likely have access to your most current ABIs. +- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. + +## The GraphQL Schema + +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. + +## Defining Entities + +Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. + +With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. + +### Good Example + +The `Gravatar` entity below is structured around a Gravatar object and is a good example of how an entity could be defined. + +```graphql +type Gravatar @entity { + id: ID! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} +``` + +### Bad Example + +The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. + +```graphql +type GravatarAccepted @entity { + id: ID! + owner: Bytes + displayName: String + imageUrl: String +} + +type GravatarDeclined @entity { + id: ID! + owner: Bytes + displayName: String + imageUrl: String +} +``` + +### Optional and Required Fields + +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: + +``` +Null value resolved for non-null field 'name' +``` + +Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. + +### Built-In Scalar Types + +#### GraphQL Supported Scalars + +We support the following scalars in our GraphQL API: + +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. 
Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | + +#### Enums + +You can also create enums within a schema. Enums have the following syntax: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: + +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). + +#### Entity Relationships + +An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. + +Relationships are defined on entities just like any other field except that the type specified is that of another entity. + +#### One-To-One Relationships + +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: + +```graphql +type Transaction @entity { + id: ID! + transactionReceipt: TransactionReceipt +} + +type TransactionReceipt @entity { + id: ID! + transaction: Transaction +} +``` + +#### One-To-Many Relationships + +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: + +```graphql +type Token @entity { + id: ID! +} + +type TokenBalance @entity { + id: ID! + amount: Int! + token: Token! +} +``` + +#### Reverse Lookups + +Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. + +#### Example + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity { + id: ID! + tokenBalances: [TokenBalance!]! 
@derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: ID! + amount: Int! + token: Token! +} +``` + +#### Many-To-Many Relationships + +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. + +#### Example + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. + +```graphql +type Organization @entity { + id: ID! + name: String! + members: [User!]! +} + +type User @entity { + id: ID! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} +``` + +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like + +```graphql +type Organization @entity { + id: ID! + name: String! + members: [UserOrganization]! @derivedFrom(field: "user") +} + +type User @entity { + id: ID! + name: String! + organizations: [UserOrganization!] @derivedFrom(field: "organization") +} + +type UserOrganization @entity { + id: ID! # Set to `${user.id}-${organization.id}` + user: User! + organization: Organization! +} +``` + +This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: + +```graphql +query usersWithOrganizations { + users { + organizations { + # this is a UserOrganization entity + organization { + name + } + } + } +} +``` + +This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. + +#### Adding comments to the schema + +As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: + +```graphql +type MyFirstEntity @entity { + "unique identifier and primary key of the entity" + id: ID! + address: Bytes! +} +``` + +## Defining Fulltext Search Fields + +Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. + +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: ID! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! 
+} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. + +### Languages supported + +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". + +Supported language dictionaries: + +| Code | Dictionary | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portugese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### Ranking Algorithms + +Supported algorithms for ordering results: + +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | Use the match quality (0-1) of the fulltext query to order the results. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | + +## Writing Mappings + +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. + +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
+ +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: + +```javascript +import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' +import { Gravatar } from '../generated/schema' + +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let id = event.params.id.toHex() + let gravatar = Gravatar.load(id) + if (gravatar == null) { + gravatar = new Gravatar(id) + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. + +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. + +### Recommended IDs for Creating New Entities + +Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. + +- `event.params.id.toHex()` +- `event.transaction.from.toHex()` +- `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` + +We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. + +## Code Generation + +In order to make working smart contracts, events and entities easy and type-safe, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. + +This is done with + +```sh +graph codegen [--output-dir ] [] +``` + +but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: + +```sh +# Yarn +yarn codegen + +# NPM +npm run codegen +``` + +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. 
In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with + +```javascript +import { + // The contract class: + Gravity, + // The events classes: + NewGravatar, + UpdatedGravatar, +} from '../generated/Gravity/Gravity' +``` + +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with + +```javascript +import { Gravatar } from '../generated/schema' +``` + +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. + +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. + +## Data Source Templates + +A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### Data Source for the Main Contract + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### Data Source Templates for Dynamically Created Contracts + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... 
+templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### Instantiating a Data Source Template + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Start indexing the exchange; `event.params.exchange` is the + // address of the new exchange contract + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> +> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. + +### Data Source Context + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + let context = new DataSourceContext() + context.setString('tradingPair', event.params.tradingPair) + Exchange.createWithContext(event.params.exchange, context) +} +``` + +Inside a mapping of the `Exchange` template, the context can then be accessed: + +```typescript +import { dataSource } from '@graphprotocol/graph-ts' + +let context = dataSource.context() +let tradingPair = context.getString('tradingPair') +``` + +There are setters and getters like `setString` and `getString` for all value types. + +## Start Blocks + +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
+ +```yaml +dataSources: + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract + startBlock: 6627917 + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User + abis: + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent +``` + +> **Note:** The contract creation block can be quickly looked up on Etherscan: +> +> 1. Search for the contract by entering its address in the search bar. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Load the transaction details page where you'll find the start block for that contract. + +## Call Handlers + +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. + +Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. + +> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. + +### Defining a Call Handler + +To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar +``` + +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. + +### Mapping Function + +Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: + +```typescript +import { CreateGravatarCall } from '../generated/Gravity/Gravity' +import { Transaction } from '../generated/schema' + +export function handleCreateGravatar(call: CreateGravatarCall): void { + let id = call.transaction.hash.toHex() + let transaction = new Transaction(id) + transaction.displayName = call.inputs._displayName + transaction.imageUrl = call.inputs._imageUrl + transaction.save() +} +``` + +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. + +## Block Handlers + +In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. + +### Supported Filters + +```yaml +filter: + kind: call +``` + +_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ + +The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.6 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCallToContract + filter: + kind: call +``` + +### Mapping Function + +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. + +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash.toHex() + let entity = new Block(id) + entity.save() +} +``` + +## Anonymous Events + +If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0xbaa8529c00000000000000000000000000000000000000000000000000000000' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. 
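To make the default rule above concrete — for a non-anonymous event, `topic0` is the keccak-256 hash of the canonical event signature — here is a small TypeScript check. It assumes ethers v5 is available purely for its hashing helper; any keccak-256 implementation gives the same result. The anonymous `LogNote` example above overrides this default, which is why an explicit `topic0` appears in that manifest.

```typescript
import { ethers } from 'ethers'

// keccak-256 of the canonical signature string: this is the value graph-node
// matches against topic0 by default for a non-anonymous event.
const signature = 'NewGravatar(uint256,address,string,string)'
const defaultTopic0 = ethers.utils.id(signature)

console.log(defaultTopic0) // a 0x-prefixed 32-byte hash unique to this signature
```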
+ +## Experimental features + +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: + +| Feature | Name | +| --------------------------------------------------------- | ------------------------- | +| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | +| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | + +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - fullTextSearch + - nonFatalErrors +dataSources: ... +``` + +Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. + +### IPFS on Ethereum Contracts + +A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. + +Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). + +> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). + +> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. + +### Non-fatal errors + +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: + +```yaml +specVersion: 0.0.4 +description: Gravatar for Ethereum +features: + - fullTextSearch + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: + +```graphql +foos(first: 100, subgraphError: allow) { + id +} + +_meta { + hasIndexingErrors +} +``` + +If the subgraph encounters an error that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: + +```graphql +"data": { + "foos": [ + { + "id": "fooId" + } + ], + "_meta": { + "hasIndexingErrors": true + } +}, +"errors": [ + { + "message": "indexing_error" + } +] +``` + +### Grafting onto Existing Subgraphs + +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. + +> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. + +A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: + +```yaml +description: ... +graft: + base: Qm... # Subgraph ID of base subgraph + block: 7345624 # Block number +``` + +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. + +Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. + +The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. 
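On the client side, the `subgraphError: allow` / `_meta` pattern shown under non-fatal errors above can be wrapped in a small query helper. The sketch below is a non-authoritative example: the endpoint URL is a placeholder for your own subgraph's query URL, and a global `fetch` (Node 18+ or a browser) is assumed.

```typescript
// Hypothetical endpoint — replace with your own subgraph's query URL.
const SUBGRAPH_URL = 'https://api.thegraph.com/subgraphs/name/example/example-subgraph'

const query = `{
  foos(first: 100, subgraphError: allow) {
    id
  }
  _meta {
    hasIndexingErrors
  }
}`

async function fetchFoos(): Promise<void> {
  const response = await fetch(SUBGRAPH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const { data, errors } = await response.json()

  if (data?._meta?.hasIndexingErrors) {
    // Results may be inconsistent past the failed handler — decide whether to trust them.
    console.warn('Subgraph has skipped indexing errors', errors)
  }
  console.log(data?.foos ?? [])
}
```

Treating `hasIndexingErrors` as a warning rather than a hard failure mirrors the behaviour described above: queries keep being served from the latest block, but results affected by the failed handler may be inconsistent.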
From 86d38847d0b7207465e00b825763c75b1be9040f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:27 -0500 Subject: [PATCH 164/432] New translations assemblyscript-migration-guide.mdx (Vietnamese) --- .../assemblyscript-migration-guide.mdx | 484 ++++++++++++++++++ 1 file changed, 484 insertions(+) create mode 100644 pages/vi/developer/assemblyscript-migration-guide.mdx diff --git a/pages/vi/developer/assemblyscript-migration-guide.mdx b/pages/vi/developer/assemblyscript-migration-guide.mdx new file mode 100644 index 000000000000..2db90a608110 --- /dev/null +++ b/pages/vi/developer/assemblyscript-migration-guide.mdx @@ -0,0 +1,484 @@ +--- +title: AssemblyScript Migration Guide +--- + +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 + +That will enable subgraph developers to use newer features of the AS language and standard library. + +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 + +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. + +## Features + +### New functionality + +- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Add 
`nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) + +### Optimizations + +- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) + +### Other + +- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) + +## How to upgrade? + +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: + +```yaml +... +dataSources: + ... + mapping: + ... + apiVersion: 0.0.6 + ... +``` + +2. Update the `graph-cli` you're using to the `latest` version by running: + +```bash +# if you have it globally installed +npm install --global @graphprotocol/graph-cli@latest + +# or in your subgraph if you have it as a dev dependency +npm install --save-dev @graphprotocol/graph-cli@latest +``` + +3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: + +```bash +npm install --save @graphprotocol/graph-ts@latest +``` + +4. Follow the rest of the guide to fix the language breaking changes. +5. Run `codegen` and `deploy` again. + +## Breaking changes + +### Nullability + +On the older version of AssemblyScript, you could create code like this: + +```typescript +function load(): Value | null { ... } + +let maybeValue = load(); +maybeValue.aMethod(); +``` + +However on the newer version, because the value is nullable, it requires you to check, like this: + +```typescript +let maybeValue = load() + +if (maybeValue) { + maybeValue.aMethod() // `maybeValue` is not null anymore +} +``` + +Or force it like this: + +```typescript +let maybeValue = load()! // breaks in runtime if value is null + +maybeValue.aMethod() +``` + +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. + +### Variable Shadowing + +Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: + +```typescript +let a = 10 +let b = 20 +let a = a + b +``` + +However now this isn't possible anymore, and the compiler returns this error: + +```typescript +ERROR TS2451: Cannot redeclare block-scoped variable 'a' + + let a = a + b; + ~~~~~~~~~~~~~ +in assembly/index.ts(4,3) +``` +You'll need to rename your duplicate variables if you had variable shadowing. +### Null Comparisons +By doing the upgrade on your subgraph, sometimes you might get errors like these: + +```typescript +ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. 
+ if (decimals == null) { + ~~~~ + in src/mappings/file.ts(41,21) +``` +To solve you can simply change the `if` statement to something like this: + +```typescript + if (!decimals) { + + // or + + if (decimals === null) { +``` + +The same applies if you're doing != instead of ==. + +### Casting + +The common way to do casting before was to just use the `as` keyword, like this: + +```typescript +let byteArray = new ByteArray(10) +let uint8Array = byteArray as Uint8Array // equivalent to: byteArray +``` + +However this only works in two scenarios: + +- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Upcasting on class inheritance (subclass → superclass) + +Examples: + +```typescript +// primitive casting +let a: usize = 10 +let b: isize = 5 +let c: usize = a + (b as usize) +``` + +```typescript +// upcasting on class inheritance +class Bytes extends Uint8Array {} + +let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array +``` + +There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: + +- Downcasting on class inheritance (superclass → subclass) +- Between two types that share a superclass + +```typescript +// downcasting on class inheritance +class Bytes extends Uint8Array {} + +let uint8Array = new Uint8Array(2) < Bytes > uint8Array // breaks in runtime :( +``` + +```typescript +// between two types that share a superclass +class Bytes extends Uint8Array {} +class ByteArray extends Uint8Array {} + +let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( +``` + +For those cases, you can use the `changetype` function: + +```typescript +// downcasting on class inheritance +class Bytes extends Uint8Array {} + +let uint8Array = new Uint8Array(2) +changetype(uint8Array) // works :) +``` + +```typescript +// between two types that share a superclass +class Bytes extends Uint8Array {} +class ByteArray extends Uint8Array {} + +let bytes = new Bytes(2) +changetype(bytes) // works :) +``` + +If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. + +```typescript +// remove nullability +let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null + +if (previousBalance != null) { + return previousBalance as AccountBalance // safe remove null +} + +let newBalance = new AccountBalance(balanceId) +``` + +For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 + +Also we've added a few more static methods in some types to ease casting, they are: + +- Bytes.fromByteArray +- Bytes.fromUint8Array +- BigInt.fromByteArray +- ByteArray.fromBigInt + +### Nullability check with property access + +To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: + +```typescript +let something: string | null = 'data' + +let somethingOrElse = something ? 
something : 'else' + +// or + +let somethingOrElse + +if (something) { + somethingOrElse = something +} else { + somethingOrElse = 'else' +} +``` + +However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: + +```typescript +class Container { + data: string | null +} + +let container = new Container() +container.data = 'data' + +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +``` + +Which outputs this error: + +```typescript +ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. + + let somethingOrElse: string = container.data ? container.data : "else"; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``` +To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: + +```typescript +class Container { + data: string | null +} + +let container = new Container() +container.data = 'data' + +let data = container.data + +let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +``` + +### Operator overloading with property access + +If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. + +```typescript +class BigInt extends Uint8Array { + @operator('+') + plus(other: BigInt): BigInt { + // ... + } +} + +class Wrapper { + public constructor(public n: BigInt | null) {} +} + +let x = BigInt.fromI32(2) +let y: BigInt | null = null + +x + y // give compile time error about nullability + +let wrapper = new Wrapper(y) + +wrapper.n = wrapper.n + x // doesn't give compile time errors as it should +``` + +We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. + +```typescript +let wrapper = new Wrapper(y) + +if (!wrapper.n) { + wrapper.n = BigInt.fromI32(0) +} + +wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt +``` + +### Value initialization + +If you have any code like this: + +```typescript +var value: Type // null +value.x = 10 +value.y = 'content' +``` + +It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: + +```typescript +var value = new Type() // initialized +value.x = 10 +value.y = 'content' +``` + +Also if you have nullable properties in a GraphQL entity, like this: + +```graphql +type Total @entity { + id: ID! + amount: BigInt +} +``` + +And you have code similar to this: + +```typescript +let total = Total.load('latest') + +if (total === null) { + total = new Total('latest') +} + +total.amount = total.amount + BigInt.fromI32(1) +``` + +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. 
So you either initialize it first: + +```typescript +let total = Total.load('latest') + +if (total === null) { + total = new Total('latest') + total.amount = BigInt.fromI32(0) +} + +total.tokens = total.tokens + BigInt.fromI32(1) +``` + +Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 + +```graphql +type Total @entity { + id: ID! + amount: BigInt! +} +``` + +```typescript +let total = Total.load('latest') + +if (total === null) { + total = new Total('latest') // already initializes non-nullable properties +} + +total.amount = total.amount + BigInt.fromI32(1) +``` + +### Class property initialization + +If you export any classes with properties that are other classes (declared by you or by the standard library) like this: + +```typescript +class Thing {} + +export class Something { + value: Thing +} +``` + +The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: + +```typescript +export class Something { + constructor(public value: Thing) {} +} + +// or + +export class Something { + value: Thing + + constructor(value: Thing) { + this.value = value + } +} + +// or + +export class Something { + value!: Thing +} +``` + +### GraphQL schema + +This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. + +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: + +```graphql +type Something @entity { + id: ID! +} + +type MyEntity @entity { + id: ID! + invalidField: [Something]! # no longer valid +} +``` + +You'll have to add an `!` to the member of the List type, like this: + +```graphql +type Something @entity { + id: ID! +} + +type MyEntity @entity { + id: ID! + invalidField: [Something!]! # valid +} +``` + +This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). + +### Other + +- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. 
Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From d1c48107e6c687f71f45129ced7cca7f36710c81 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:30 -0500 Subject: [PATCH 165/432] New translations assemblyscript-api.mdx (Vietnamese) --- pages/vi/developer/assemblyscript-api.mdx | 714 ++++++++++++++++++++++ 1 file changed, 714 insertions(+) create mode 100644 pages/vi/developer/assemblyscript-api.mdx diff --git a/pages/vi/developer/assemblyscript-api.mdx b/pages/vi/developer/assemblyscript-api.mdx new file mode 100644 index 000000000000..a609e6cd657f --- /dev/null +++ b/pages/vi/developer/assemblyscript-api.mdx @@ -0,0 +1,714 @@ +--- +title: AssemblyScript API +--- + +> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) + +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: + +- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and +- code generated from subgraph files by `graph codegen`. + +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. + +## Installation + +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: + +```sh +yarn install # Yarn +npm install # NPM +``` + +If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: + +```sh +yarn add --dev @graphprotocol/graph-ts # Yarn +npm install --save-dev @graphprotocol/graph-ts # NPM +``` + +## API Reference + +The `@graphprotocol/graph-ts` library provides the following APIs: + +- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. +- A `store` API to load and save entities from and to the Graph Node store. +- A `log` API to log messages to the Graph Node output and the Graph Explorer. +- An `ipfs` API to load files from IPFS. +- A `json` API to parse JSON data. +- A `crypto` API to use cryptographic functions. +- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. + +### Versions + +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. 
+ +| Version | Release notes | +|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | + +### Built-in Types + +Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). + +The following additional types are provided by `@graphprotocol/graph-ts`. + +#### ByteArray + +```typescript +import { ByteArray } from '@graphprotocol/graph-ts' +``` + +`ByteArray` represents an array of `u8`. + +_Construction_ + +- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. + +_Type conversions_ + +- `toHexString(): string` - Converts to a hex string prefixed with `0x`. +- `toString(): string` - Interprets the bytes as a UTF-8 string. +- `toBase58(): string` - Encodes the bytes into a base58 string. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. +- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. + +_Operators_ + +- `equals(y: ByteArray): bool` – can be written as `x == y`. + +#### BigDecimal + +```typescript +import { BigDecimal } from '@graphprotocol/graph-ts' +``` + +`BigDecimal` is used to represent arbitrary precision decimals. + +_Construction_ + +- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. +- `static fromString(s: string): BigDecimal` – parses from a decimal string. + +_Type conversions_ + +- `toString(): string` – prints to a decimal string. + +_Math_ + +- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. +- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. +- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. +- `dividedBy(y: BigDecimal): BigDecimal` – can be written as `x / y`. +- `equals(y: BigDecimal): bool` – can be written as `x == y`. +- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. +- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `le(y: BigDecimal): bool` – can be written as `x <= y`. +- `gt(y: BigDecimal): bool` – can be written as `x > y`. +- `ge(y: BigDecimal): bool` – can be written as `x >= y`. +- `neg(): BigDecimal` - can be written as `-x`. + +#### BigInt + +```typescript +import { BigInt } from '@graphprotocol/graph-ts' +``` + +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. + +The `BigInt` class has the following API: + +_Construction_ + +- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. + + _Type conversions_ + +- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. +- `x.toString(): string` – turns `BigInt` into a decimal number string. +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. 
+- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. + +_Math_ + +- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. +- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. +- `x.times(y: BigInt): BigInt` – can be written as `x * y`. +- `x.dividedBy(y: BigInt): BigInt` – can be written as `x / y`. +- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. +- `x.equals(y: BigInt): bool` – can be written as `x == y`. +- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. +- `x.lt(y: BigInt): bool` – can be written as `x < y`. +- `x.le(y: BigInt): bool` – can be written as `x <= y`. +- `x.gt(y: BigInt): bool` – can be written as `x > y`. +- `x.ge(y: BigInt): bool` – can be written as `x >= y`. +- `x.neg(): BigInt` – can be written as `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. +- `x.isZero(): bool` – Convenience for checking if the number is zero. +- `x.isI32(): bool` – Check if the number fits in an `i32`. +- `x.abs(): BigInt` – Absolute value. +- `x.pow(exp: u8): BigInt` – Exponentiation. +- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. + +#### TypedMap + +```typescript +import { TypedMap } from '@graphprotocol/graph-ts' +``` + +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). + +The `TypedMap` class has the following API: + +- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` +- `map.set(key: K, value: V): void` – sets the value of `key` to `value` +- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map +- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map +- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not + +#### Bytes + +```typescript +import { Bytes } from '@graphprotocol/graph-ts' +``` + +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. + +The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: + +- `b.toHex()` – returns a hexadecimal string representing the bytes in the array +- `b.toString()` – converts the bytes in the array to a string of unicode characters +- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) + +#### Address + +```typescript +import { Address } from '@graphprotocol/graph-ts' +``` + +`Address` extends `Bytes` to represent Ethereum `address` values. + +It adds the following method on top of the `Bytes` API: + +- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string + +### Store API + +```typescript +import { store } from '@graphprotocol/graph-ts' +``` + +The `store` API allows to load, save and remove entities from and to the Graph Node store. 
+ +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. + +#### Creating entities + +The following is a common pattern for creating entities from Ethereum events. + +```typescript +// Import the Transfer event class generated from the ERC20 ABI +import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' + +// Import the Transfer entity type generated from the GraphQL schema +import { Transfer } from '../generated/schema' + +// Transfer event handler +export function handleTransfer(event: TransferEvent): void { + // Create a Transfer entity, using the hexadecimal string representation + // of the transaction hash as the entity ID + let id = event.transaction.hash.toHex() + let transfer = new Transfer(id) + + // Set properties on the entity, using the event parameters + transfer.from = event.params.from + transfer.to = event.params.to + transfer.amount = event.params.amount + + // Save the entity to the store + transfer.save() +} +``` + +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. + +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. + +#### Loading entities from the store + +If an entity already exists, it can be loaded from the store with the following: + +```typescript +let id = event.transaction.hash.toHex() // or however the ID is constructed +let transfer = Transfer.load(id) +if (transfer == null) { + transfer = new Transfer(id) +} + +// Use the Transfer entity as before +``` + +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. + +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. + +#### Updating existing entities + +There are two ways to update an existing entity: + +1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. +2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. + +Changing properties is straight forward in most cases, thanks to the generated property setters: + +```typescript +let transfer = new Transfer(id) +transfer.from = ... +transfer.to = ... +transfer.amount = ... +``` + +It is also possible to unset properties with one of the following two instructions: + +```typescript +transfer.from.unset() +transfer.from = null +``` + +This only works with optional properties, i.e. 
properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. + +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. + +```typescript +// This won't work +entity.numbers.push(BigInt.fromI32(1)) +entity.save() + +// This will work +let numbers = entity.numbers +numbers.push(BigInt.fromI32(1)) +entity.numbers = numbers +entity.save() +``` + +#### Removing entities from the store + +There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: + +```typescript +import { store } from '@graphprotocol/graph-ts' +... +let id = event.transaction.hash.toHex() +store.remove('Transfer', id) +``` + +### Ethereum API + +The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. + +#### Support for Ethereum Types + +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. + +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. + +The following example illustrates this. Given a subgraph schema like + +```graphql +type Transfer @entity { + from: Bytes! + to: Bytes! + amount: BigInt! +} +``` + +and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: + +```typescript +let id = event.transaction.hash.toHex() +let transfer = new Transfer(id) +transfer.from = event.params.from +transfer.to = event.params.to +transfer.amount = event.params.amount +transfer.save() +``` + +#### Events and Block/Transaction Data + +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. 
The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): + +```typescript +class Event { + address: Address + logIndex: BigInt + transactionLogIndex: BigInt + logType: string | null + block: Block + transaction: Transaction + parameters: Array +} + +class Block { + hash: Bytes + parentHash: Bytes + unclesHash: Bytes + author: Address + stateRoot: Bytes + transactionsRoot: Bytes + receiptsRoot: Bytes + number: BigInt + gasUsed: BigInt + gasLimit: BigInt + timestamp: BigInt + difficulty: BigInt + totalDifficulty: BigInt + size: BigInt | null + baseFeePerGas: BigInt | null +} + +class Transaction { + hash: Bytes + index: BigInt + from: Address + to: Address | null + value: BigInt + gasLimit: BigInt + gasPrice: BigInt + input: Bytes + nonce: BigInt +} +``` + +#### Access to Smart Contract State + +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. + +A common pattern is to access the contract from which an event originates. This is achieved with the following code: + +```typescript +// Import the generated contract class +import { ERC20Contract } from '../generated/ERC20Contract/ERC20Contract' +// Import the generated entity class +import { Transfer } from '../generated/schema' + +export function handleTransfer(event: Transfer) { + // Bind the contract to the address that emitted the event + let contract = ERC20Contract.bind(event.address) + + // Access state variables and functions by calling them + let erc20Symbol = contract.symbol() +} +``` + +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. + +Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. + +#### Handling Reverted Calls + +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: + +```typescript +let gravity = Gravity.bind(event.address) +let callResult = gravity.try_gravatarToOwner(gravatar) +if (callResult.reverted) { + log.info('getGravatar reverted', []) +} else { + let owner = callResult.value +} +``` + +Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. + +#### Encoding/Decoding ABI + +Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. + +```typescript +import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' + +let tupleArray: Array = [ + ethereum.Value.fromAddress(Address.fromString('0x0000000000000000000000000000000000000420')), + ethereum.Value.fromUnsignedBigInt(BigInt.fromI32(62)), +] + +let tuple = tupleArray as ethereum.Tuple + +let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! 
+ +let decoded = ethereum.decode('(address,uint256)', encoded) +``` + +For more information: + +- [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) +- Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) +- More [complex example](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72). + +### Logging API + +```typescript +import { log } from '@graphprotocol/graph-ts' +``` + +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. + +The `log` API includes the following functions: + +- `log.debug(fmt: string, args: Array): void` - logs a debug message. +- `log.info(fmt: string, args: Array): void` - logs an informational message. +- `log.warning(fmt: string, args: Array): void` - logs a warning. +- `log.error(fmt: string, args: Array): void` - logs an error message. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. + +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. + +```typescript +log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) +``` + +#### Logging one or more values + +##### Logging a single value + +In the example below, the string value "A" is passed into an array to become`['A']` before being logged: + +```typescript +let myValue = 'A' + +export function handleSomeEvent(event: SomeEvent): void { + // Displays : "My value is: A" + log.info('My value is: {}', [myValue]) +} +``` + +##### Logging a single entry from an existing array + +In the example below, only the first value of the argument array is logged, despite the array containing three values. + +```typescript +let myArray = ['A', 'B', 'C'] + +export function handleSomeEvent(event: SomeEvent): void { + // Displays : "My value is: A" (Even though three values are passed to `log.info`) + log.info('My value is: {}', myArray) +} +``` + +#### Logging multiple entries from an existing array + +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. + +```typescript +let myArray = ['A', 'B', 'C'] + +export function handleSomeEvent(event: SomeEvent): void { + // Displays : "My first value is: A, second value is: B, third value is: C" + log.info('My first value is: {}, second value is: {}, third value is: {}', myArray) +} +``` + +##### Logging a specific entry from an existing array + +To display a specific value in the array, the indexed value must be provided. 
+ +```typescript +export function handleSomeEvent(event: SomeEvent): void { + // Displays : "My third value is C" + log.info('My third value is: {}', [myArray[2]]) +} +``` + +##### Logging event information + +The example below logs the block number, block hash and transaction hash from an event: + +```typescript +import { log } from '@graphprotocol/graph-ts' + +export function handleSomeEvent(event: SomeEvent): void { + log.debug('Block number: {}, block hash: {}, transaction hash: {}', [ + event.block.number.toString(), // "47596000" + event.block.hash.toHexString(), // "0x..." + event.transaction.hash.toHexString(), // "0x..." + ]) +} +``` + +### IPFS API + +```typescript +import { ipfs } from '@graphprotocol/graph-ts' +``` + +Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. + +Given an IPFS hash or path, reading a file from IPFS is done as follows: + +```typescript +// Put this inside an event handler in the mapping +let hash = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D' +let data = ipfs.cat(hash) + +// Paths like `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile` +// that include files in directories are also supported +let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' +let data = ipfs.cat(path) +``` + +**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. + +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: + +```typescript +import { JSONValue, Value } from '@graphprotocol/graph-ts' + +export function processItem(value: JSONValue, userData: Value): void { + // See the JSONValue documentation for details on dealing + // with JSON values + let obj = value.toObject() + let id = obj.get('id') + let title = obj.get('title') + + if (!id || !title) { + return + } + + // Callbacks can also created entities + let newItem = new Item(id.toString()) + newItem.title = title.toString() + newitem.parent = userData.toString() // Set parent to "parentId" + newitem.save() +} + +// Put this inside an event handler in the mapping +ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) + +// Alternatively, use `ipfs.mapJSON` +ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) +``` + +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. 
Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. + +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. + +### Crypto API + +```typescript +import { crypto } from '@graphprotocol/graph-ts' +``` + +The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: + +- `crypto.keccak256(input: ByteArray): ByteArray` + +### JSON API + +```typescript +import { json, JSONValueKind } from '@graphprotocol/graph-ts' +``` + +JSON data can be parsed using the `json` API: + +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array +- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed +- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed + +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: + +```typescript +let value = json.fromBytes(...) +if (value.kind == JSONValueKind.BOOL) { + ... +} +``` + +In addition, there is a method to check if the value is `null`: + +- `value.isNull(): boolean` + +When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: + +- `value.toBool(): boolean` +- `value.toI64(): i64` +- `value.toF64(): f64` +- `value.toBigInt(): BigInt` +- `value.toString(): string` +- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) + +### Type Conversions Reference + +| Source(s) | Destination | Conversion function | +| -------------------- | -------------------- | ---------------------------- | +| Address | Bytes | none | +| Address | ID | s.toHexString() | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | Bigint.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | 
ByteArray.fromUTF8(s) | + +### Data Source Metadata + +You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: + +- `dataSource.address(): Address` +- `dataSource.network(): string` +- `dataSource.context(): DataSourceContext` + +### Entity and DataSourceContext + +The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: + +- `setString(key: string, value: string): void` +- `setI32(key: string, value: i32): void` +- `setBigInt(key: string, value: BigInt): void` +- `setBytes(key: string, value: Bytes): void` +- `setBoolean(key: string, value: bool): void` +- `setBigDecimal(key, value: BigDecimal): void` +- `getString(key: string): string` +- `getI32(key: string): i32` +- `getBigInt(key: string): BigInt` +- `getBytes(key: string): Bytes` +- `getBoolean(key: string): boolean` +- `getBigDecimal(key: string): BigDecimal` From c4f0cd8aabdc103b8f4aa05ae15f0ac1545482ae Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:31 -0500 Subject: [PATCH 166/432] New translations delegating.mdx (Vietnamese) --- pages/vi/delegating.mdx | 94 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 94 insertions(+) create mode 100644 pages/vi/delegating.mdx diff --git a/pages/vi/delegating.mdx b/pages/vi/delegating.mdx new file mode 100644 index 000000000000..a18f2be577f6 --- /dev/null +++ b/pages/vi/delegating.mdx @@ -0,0 +1,94 @@ +--- +title: Delegator +--- + +Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. + +## Delegator Guide + +This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: + +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a delegator +- A Video guide showing the steps to delegate in the Graph Network UI + +## Delegation Risks + +Listed below are the main risks of being a delegator in the protocol. + +### The delegation fee + +It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. + +This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. + +### The delegation unbonding period + +Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. + +One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. + +
+ ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day + unbonding period._ +
+ +### Choosing a trustworthy indexer with a fair reward payout for delegators + +This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. + +Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. + +
+ ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The + middle one is giving delegators 20%. The bottom one is giving delegators ~83%.* +
+ +- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. + +As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. + +### Calculating delegators expected return + +A Delegator has to consider a lot of factors when determining the return. These + +- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. + +### Considering the query fee cut and indexing fee cut + +As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: + +![Delegation Image 3](/img/Delegation-Reward-Formula.png) + +### Considering the indexers delegation pool + +Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) + +Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. + +A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. + +### Considering the delegation capacity + +Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. 
+ +Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. + +## Video guide for the network UI + +This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI. + +
+ +
From 155e0d2440075b291e2f9bb83c939b59a9690dfa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:32 -0500 Subject: [PATCH 167/432] New translations curating.mdx (Vietnamese) --- pages/vi/curating.mdx | 104 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 pages/vi/curating.mdx diff --git a/pages/vi/curating.mdx b/pages/vi/curating.mdx new file mode 100644 index 000000000000..6e1a7729217d --- /dev/null +++ b/pages/vi/curating.mdx @@ -0,0 +1,104 @@ +--- +title: Curator +--- + +Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. + +When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. + +Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) + +## Bonding Curve 101 + +First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. + +![Price per shares](/img/price-per-share.png) + +As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: + +![Bonding curve](/img/bonding-curve.png) + +Consider we have two curators that mint shares for a subgraph: + +- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. +- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. +- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. +- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. +- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. +- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. 
+ +In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** + +In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. + +## How to Signal + +Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) + +A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. + +Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. + +Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. + +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. + +## What does Signaling mean for The Graph Network? + +For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. + +And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. + +Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. 
Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! + +![Signaling diagram](/img/curator-signaling.png) + +Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). + +![Explorer subgraphs](/img/explorer-subgraphs.png) + +## Risks + +1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). +4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. + +## Curation FAQs + +### 1. What % of query fees do Curators earn? + +By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. + +### 2. How do I decide which subgraphs are high quality to signal on? + +Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: + +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. + +### 3. What’s the cost of upgrading a subgraph? + +Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 
0.5%, because upgrading subgraphs is an on-chain action which costs gas. + +### 4. How often can I upgrade my subgraph? + +It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. + +### 5. Can I sell my curation shares? + +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. + +Still confused? Check out our Curation video guide below: + +
+ +
From 03d365c29a1b0c0ce3d2bf6d7935e8440394ee1a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Mon, 17 Jan 2022 12:44:33 -0500 Subject: [PATCH 168/432] New translations near.mdx (Vietnamese) --- pages/vi/supported-networks/near.mdx | 265 +++++++++++++++++++++++++++ 1 file changed, 265 insertions(+) create mode 100644 pages/vi/supported-networks/near.mdx diff --git a/pages/vi/supported-networks/near.mdx b/pages/vi/supported-networks/near.mdx new file mode 100644 index 000000000000..288ac380494c --- /dev/null +++ b/pages/vi/supported-networks/near.mdx @@ -0,0 +1,265 @@ +--- +title: Building Subgraphs on NEAR +--- + +> NEAR support in Graph Node and on the Hosted Service is in beta: please contact near@thegraph.com with any questions about building NEAR subgraphs! + +This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## What is NEAR? + +[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. + +## What are NEAR subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. + +Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/docs/concepts/transaction#receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command line tool for building and deploying subgraphs. + +`@graphprotocol/graph-ts` is a library of subgraph-specific types. + +NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR subgraph is very similar to building a subgraph which indexes Ethereum. + +There are three aspects of subgraph definition: + +**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. 
+ +During subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +``` + +### Subgraph Manifest Definition + +The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:: + +```yaml +specVersion: 0.0.2 +schema: + file: ./src/schema.graphql # link to the schema file +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.5 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the Assemblyscript mappings +``` + +- NEAR subgraphs introduce a new `kind` of data source (`near`) +- The `network` should correspond to a network on the hosting Graph Node. On the Hosted Service, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account, or a sub account. + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting subgraph database, and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developer/assemblyscript-api). 
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array, + receiptIds: Array, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array, + inputDataIds: Array, + actions: Array, + } + +class BlockHeader { + height: u64, + prevHeight: u64,// Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise the rest of the [AssemblyScript API](/developer/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). + +The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet in beta, with the following network names: + +- `near-mainnet` +- `near-testnet` + +More information on creating and deploying subgraphs on the Hosted Service can be found [here](/hosted-service/deploy-subgraph-hosted). + +As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". + +Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: + +``` +$ graph create --node subgraph/name # creates a subgraph on a local Graph Node (on the Hosted Service, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +``` + +The node configuration will depend where the subgraph is being deployed. + +#### Hosted Service: + +``` +graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph.com/ipfs/ --access-token +``` + +#### Local Graph Node (based on default configuration): + +``` +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 +``` + +Once your subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the subgraph itself: + +``` +{ + _meta { + block { number } + } +} +``` + +### Indexing NEAR with a Local Graph Node + +Running a Graph Node that indexes NEAR has the following operational requirements: + +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component(s) +- Graph Node with Firehose endpoint configured + +We will provide more information on running the above components soon. + +## Querying a NEAR Subgraph + +The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. + +## Example Subgraphs + +Here are some example subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/example-subgraph/tree/near-blocks-example) + +[NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! + +### Can a subgraph index both NEAR and EVM chains? + +No, a subgraph can only support data sources from one chain / network. + +### Can subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub accounts? + +Receipt handlers will only be triggered for the exact-match of the named account. More flexibility may be added in future. + +### Can NEAR subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? + +Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR subgraphs? + +If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. 
+ +## References + +- [NEAR developer documentation](https://docs.near.org/docs/develop/basics/getting-started) From c5805a0420ea4b99eae7e20817b8ce65ff2b0207 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Tue, 18 Jan 2022 09:24:09 -0500 Subject: [PATCH 169/432] New translations introduction.mdx (Spanish) --- pages/es/about/introduction.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/pages/es/about/introduction.mdx b/pages/es/about/introduction.mdx index baa0e542240a..4a49870260dd 100644 --- a/pages/es/about/introduction.mdx +++ b/pages/es/about/introduction.mdx @@ -4,17 +4,17 @@ title: Introducción En esta página se explica qué es The Graph y cómo puedes empezar a utilizarlo. -## Qué es The Graph +## ¿Qué es The Graph? -The Graph es un protocolo descentralizado para indexar y consultar los datos de las blockchains, empezando por Ethereum. Permite consultar datos que son difíciles de consultar directamente. +The Graph es un protocolo descentralizado que permite indexar y consultar los datos de diferentes blockchains, el cual empezó por Ethereum. Permite consultar datos los cuales pueden ser difíciles de consultar directamente. Los proyectos con contratos inteligentes complejos como [Uniswap](https://uniswap.org/) y las iniciativas de NFTs como [Bored Ape Yacht Club](https://boredapeyachtclub.com/) almacenan los datos en la blockchain de Ethereum, lo que hace realmente difícil leer algo más que los datos básicos directamente desde la blockchain. -En el caso de Bored Ape Yacht Club, podemos realizar operaciones de lectura básicas en [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) como obtener el propietario de un determinado Ape, obtener el URI de contenido de un Ape con base en su ID, o el supply total, ya que estas operaciones de lectura están programadas directamente en el contrato inteligente, pero no son posibles las consultas y operaciones más avanzadas del mundo real como la agregación, la búsqueda, las relaciones y el filtrado no trivial. Por ejemplo, si quisiéramos consultar los apes que son propiedad de una determinada dirección, y filtrar por una de sus características, no podríamos obtener esa información interactuando directamente con el propio contrato. +En el caso de Bored Ape Yacht Club, podemos realizar operaciones de lecturas básicas en [su contrato](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code), para obtener el propietario de un determinado Ape, obtener el URI de un Ape en base a su ID, o el supply total, ya que estas operaciones de lectura están programadas directamente en el contrato inteligente, pero no son posibles las consultas y operaciones más avanzadas del mundo real como la adición, consultas, las relaciones y el filtrado no trivial. Por ejemplo, si quisiéramos consultar los Apes que son propiedad de una dirección en concreto, y filtrar por una de sus características, no podríamos obtener esa información interactuando directamente con el contrato. -Para obtener estos datos, tendríamos que procesar cada uno de los eventos de [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) emitidos, leer los metadatos de IPFS utilizando el ID del Token y el hash de IPFS, y luego agregarlos. Incluso para este tipo de preguntas relativamente sencillas, una aplicación descentralizada (dapp) que se ejecutara en un navegador tardaría **horas o incluso días** en obtener una respuesta. 
+Para obtener estos datos, tendríamos que procesar cada uno de los eventos de [`transferencia`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) que se hayan emitido, leer los metadatos de IPFS utilizando el ID del token y el hash del IPFS, con el fin de luego agregarlos. Incluso para este tipo de preguntas relativamente sencillas, una aplicación descentralizada (dapp) que se ejecutara en un navegador tardaría **horas o incluso días** en obtener una respuesta. -También podrías construir tu propio servidor, procesar las transacciones allí, guardarlas en una base de datos y construir un endpoint de la API sobre todo ello para consultar los datos. Sin embargo, esta opción requiere recursos intensivos, necesita mantenimiento, presenta un único punto de fallo y rompe importantes propiedades de seguridad necesarias para la descentralización. +También podrías construir tu propio servidor, procesar las transacciones allí, guardarlas en una base de datos y construir un endpoint de la API sobre todo ello para consultar los datos. Sin embargo, esta opción requiere recursos intensivos, necesita mantenimiento, y si llegase a presentar algún tipo de fallo podría incluso vulnerar algunos protocolos de seguridad que son necesarios para la descentralización. **Indexar los datos de la blockchain es muy, muy difícil.** From 1717f5c2e64d600f9a5418aaccb38be338d5faff Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Tue, 18 Jan 2022 10:22:57 -0500 Subject: [PATCH 170/432] New translations introduction.mdx (Spanish) --- pages/es/about/introduction.mdx | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/pages/es/about/introduction.mdx b/pages/es/about/introduction.mdx index 4a49870260dd..70290d8c3649 100644 --- a/pages/es/about/introduction.mdx +++ b/pages/es/about/introduction.mdx @@ -18,17 +18,17 @@ También podrías construir tu propio servidor, procesar las transacciones allí **Indexar los datos de la blockchain es muy, muy difícil.** -Las propiedades de la blockchain, como la finalidad, las reorganizaciones de la cadena o los bloques sin cerrar, complican aún más este proceso y hacen que no sólo se consuma tiempo, sino que sea conceptualmente difícil recuperar los resultados correctos de las consultas de los datos de la blockchain. +Las propiedades de la blockchain, su finalidad, la reorganización de la cadena o los bloques que están por cerrarse, complican aún más este proceso y hacen que no solo se consuma tiempo, sino que sea conceptualmente difícil recuperar los resultados correctos proporcionados por la blockchain. -The Graph resuelve esto con un protocolo descentralizado que indexa y permite la consulta eficiente y de alto rendimiento de los datos de la blockchain. Estas APIs ("subgrafos" indexados) pueden consultarse después con una API GraphQL estándar. Actualmente, existe un servicio alojado (hosted) y un protocolo descentralizado con las mismas capacidades. Ambos están respaldados por la implementación de código abierto de [Graph Node](https://github.com/graphprotocol/graph-node). +The Graph resuelve esto con un protocolo descentralizado que indexa y permite una consulta eficiente y de alto rendimiento para recibir los datos de la blockchain. Estas APIs ("subgrafos" indexados) pueden consultarse después con una API de GraphQL estándar. Actualmente, existe un servicio alojado (hosted) y un protocolo descentralizado con las mismas capacidades. 
Ambos están respaldados por la implementación de código abierto de [Graph Node](https://github.com/graphprotocol/graph-node). -## Cómo Funciona The Graph +## ¿Cómo funciona The Graph? -The Graph aprende qué y cómo indexar los datos de Ethereum basándose en las descripciones de los subgrafos, conocidas como el manifiesto de los subgrafos. La descripción del subgrafo define los contratos inteligentes de interés para un subgrafo, los eventos en esos contratos a los que prestar atención, y cómo mapear los datos de los eventos a los datos que The Graph almacenará en su base de datos. +The Graph aprende, qué y cómo indexar los datos de Ethereum, basándose en las descripciones de los subgrafos, conocidas como el manifiesto de los subgrafos. La descripción del subgrafo define los contratos inteligentes de interés para este subgrafo, los eventos en esos contratos a los que prestar atención, y cómo mapear los datos de los eventos a los datos que The Graph almacenará en su base de datos. -Una vez que has escrito el `subgraph manifest`, utilizas la CLI de The Graph para almacenar la definición en IPFS y decirle al indexador que empiece a indexar los datos de ese subgrafo. +Una vez que has escrito el `subgraph manifest`, utilizas el CLI de The Graph para almacenar la definición en IPFS y decirle al indexador que empiece a indexar los datos de ese subgrafo. -Este diagrama ofrece más detalles sobre el flujo de datos una vez que se ha desplegado un manifiesto de subgrafo, que trata de las transacciones de Ethereum: +Este diagrama ofrece más detalles sobre el flujo de datos una vez que se ha desplegado en el manifiesto para un subgrafo, que trata de las transacciones en Ethereum: ![](/img/graph-dataflow.png) @@ -36,12 +36,12 @@ El flujo sigue estos pasos: 1. Una aplicación descentralizada añade datos a Ethereum a través de una transacción en un contrato inteligente. 2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. -3. Graph Node escanea continuamente Ethereum en busca de nuevos bloques y los datos de su subgrafo que puedan contener. -4. Graph Node encuentra los eventos de Ethereum para tu subgrafo en estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. -5. La aplicación descentralizada consulta a Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacén de datos subyacente con el fin de obtener estos datos, haciendo uso de las capacidades de indexación del almacén. La aplicación descentralizada muestra estos datos en una rica interfaz de usuario para los usuarios finales, que utilizan para emitir nuevas transacciones en Ethereum. El ciclo se repite. +3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de su subgrafo que puedan contener. +4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. +5. La aplicación descentralizada consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. 
El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La aplicación descentralizada muestra estos datos en una interfaz muy completa para el usuario, a fin de que los cliente que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. Y así... el ciclo se repite continuamente. -## Próximos Pasos +## Próximos puntos -En las siguientes secciones entraremos en más detalles sobre cómo definir subgrafos, cómo desplegarlos y cómo consultar los datos de los índices que construye Graph Node. +En las siguientes secciones entraremos en más detalles sobre cómo definir subgrafos, cómo desplegarlos y cómo consultar los datos de los índices que construye el Graph Node. -Antes de que empieces a escribir tu propio subgrafo, puede que quieras echar un vistazo a The Graph Explorer y explorar algunos de los subgrafos que ya han sido desplegados. La página de cada subgrafo contiene un playground que te permite consultar los datos de ese subgrafo con GraphQL. +Antes de que empieces a escribir tu propio subgrafo, es posible que debas echar un vistazo a The Graph Explorer para explorar algunos de los subgrafos que ya han sido desplegados. La página de cada subgrafo contiene un playground que te permite consultar los datos de ese subgrafo usando GraphQL. From f3ad7b19bbfe8edf71bf01279ac357c1202fa99f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Tue, 18 Jan 2022 10:22:58 -0500 Subject: [PATCH 171/432] New translations network.mdx (Spanish) --- pages/es/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/es/about/network.mdx b/pages/es/about/network.mdx index 316fd3e082f9..a81e6ef93cbb 100644 --- a/pages/es/about/network.mdx +++ b/pages/es/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Visión General de la Red +title: Visión general de la red --- -The Graph Network es un protocolo de indexación descentralizado para organizar los datos de la blockchain. Las aplicaciones utilizan GraphQL para consultar APIs abiertas llamadas subgrafos, para recuperar los datos que están indexados en la red. Con The Graph, los desarrolladores pueden construir aplicaciones sin servidor que se ejecutan completamente en la infraestructura pública. +The Graph Network es un protocolo de indexación descentralizado, el cual permite organizar los datos de la blockchain. Las aplicaciones utilizan GraphQL para consultar APIs públicas, llamadas subgrafos, que sirven para recuperar los datos que están indexados en la red. Con The Graph, los desarrolladores pueden construir sus aplicaciones completamente en una infraestructura pública. -> Dirección del token GRT [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) ## Descripción -The Graph Network está formada por Indexadores, Curadores y Delegadores que proporcionan servicios a la red y sirven datos a las aplicaciones Web3. Los consumidores utilizan las aplicaciones y consumen los datos. +The Graph Network está formada por Indexadores, Curadores y Delegadores que proporcionan servicios a la red y proveen datos a las aplicaciones Web3. Los clientes utilizan estas aplicaciones y consumen los datos. 
-![Economía de los Tokens](/img/Network-roles@2x.png) +![Economía de los tokens](/img/Network-roles@2x.png) -Para garantizar la seguridad económica de The Graph Network y la integridad de los datos que se consultan, los participantes ponen en staking y utilizan Graph Tokens (GRT). GRT es un token de trabajo que es un ERC-20 en la blockchain de Ethereum, utilizado para asignar recursos en la red. Los Indexadores, Curadores y Delegadores activos pueden prestar servicios y obtener ingresos de la red, proporcionales a la cantidad de trabajo que realizan y a su participación en GRT. +Para garantizar la seguridad económica de The Graph Network y la integridad de los datos que se consultan, los participantes colocan en staking sus Graph Tokens (GRT). GRT es un token alojado en el protocolo ERC-20 de la blockchain Ethereum, utilizado para asignar recursos en la red. Los Indexadores, Curadores y Delegadores pueden prestar sus servicios y obtener ingresos por medio de la red, en proporción a su desempeño y la cantidad de GRT que hayan colocado en staking. From 28f2b22c1e14f11f49c601907b1371c8f938acc8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Tue, 18 Jan 2022 10:23:01 -0500 Subject: [PATCH 172/432] New translations assemblyscript-api.mdx (Spanish) --- pages/es/developer/assemblyscript-api.mdx | 52 +++++++++++------------ 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/pages/es/developer/assemblyscript-api.mdx b/pages/es/developer/assemblyscript-api.mdx index c070c682f6e6..a6d3c208ef81 100644 --- a/pages/es/developer/assemblyscript-api.mdx +++ b/pages/es/developer/assemblyscript-api.mdx @@ -2,54 +2,54 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) +> Nota: ten en cuenta que si creaste un subgraph usando el `graph-cli`/`graph-ts` en su versión `0.22.0`, debes saber que estás utilizando una versión antigua del AssemblyScript y te recomendamos mirar la [`guía para migrar`](/developer/assemblyscript-migration-guide) tu código. -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Está página explica que APIs usar para recibir ciertos datos de los subgrafos. Dos tipos de estas APIs se describen a continuación: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- La [librería de Graph TypeScript](https://github.com/graphprotocol/graph-ts) (`graph-ts`) y +- el generador de códigos provenientes de los archivos del subgraph, `graph codegen`. -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +También es posible añadir otras librerías, siempre y cuando sean compatible con [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Debido a que ese lenguaje de mapeo es el que usamos, la [wiki de AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) es una fuente muy completa para las características de este lenguaje y contiene una librería estándar que te puede resultar útil. -## Installation +## Instalación -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +Los subgrafos creados con [`graph init`](/developer/create-subgraph-hosted) vienen configurados previamente. Todo lo necesario para instalar estás configuraciones lo podrás encontrar en uno de los siguientes comandos: ```sh yarn install # Yarn npm install # NPM ``` -If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: +Si el subgrafo fue creado con scratch, uno de los siguientes dos comandos podrá instalar la librería TypeScript como una dependencia: ```sh yarn add --dev @graphprotocol/graph-ts # Yarn npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## API Reference +## Referencias de API -The `@graphprotocol/graph-ts` library provides the following APIs: +La librería de `@graphprotocol/graph-ts` proporciona las siguientes APIs: -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and the Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. +- Una API `ethereum` para trabajar con los contratos inteligentes alojados en Ethereum, sus respectivos eventos, bloques, transacciones y valores. +- Un `almacenamiento` para cargar y guardar entidades en Graph Node. +- Una API de `registro` para registrar los mensajes output de The Graph y el Graph Explorer. +- Una API para `ipfs` que permite cargar archivos provenientes de IPFS. +- Una API de `json` para analizar datos en formato JSON. +- Una API para `crypto` que permite usar funciones criptográficas. +- Niveles bajos que permiten traducir entre los distintos sistemas, tales como, Ethereum, JSON, GraphQL y AssemblyScript. -### Versions +### Versiones -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +La `apiVersion` en el manifiesto del subgrafo especifica la versión de la API correspondiente al mapeo que está siendo ejecutado en el Graph Node de un subgrafo en específico. La versión actual para la APÍ de mapeo es la 0.0.6. -| Version | Release notes | -|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Notas del lanzamiento | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Se agregó la casilla `nonce` a las Transacciones de Ethereum, se
añadió `baseFeePerGas` para los bloques de Ethereum | +| 0.0.5 | Se actualizó la versión del AssemblyScript a la v0.19.10 (esta incluye cambios importantes, recomendamos leer la [`guía de migración`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renombrado como `ethereum.transaction.gasLimit` | +| 0.0.4 | Añadido la casilla de `functionSignature` para la función de Ethereum SmartContractCall | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types From 01448c2e047e8ed76fcaf146fa82fa584bc7f722 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Tue, 18 Jan 2022 11:40:32 -0500 Subject: [PATCH 173/432] New translations assemblyscript-api.mdx (Spanish) --- pages/es/developer/assemblyscript-api.mdx | 96 +++++++++++------------ 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/pages/es/developer/assemblyscript-api.mdx b/pages/es/developer/assemblyscript-api.mdx index a6d3c208ef81..686389993b72 100644 --- a/pages/es/developer/assemblyscript-api.mdx +++ b/pages/es/developer/assemblyscript-api.mdx @@ -43,19 +43,19 @@ La librería de `@graphprotocol/graph-ts` proporciona las siguientes APIs: La `apiVersion` en el manifiesto del subgrafo especifica la versión de la API correspondiente al mapeo que está siendo ejecutado en el Graph Node de un subgrafo en específico. La versión actual para la APÍ de mapeo es la 0.0.6. -| Version | Notas del lanzamiento | -|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Se agregó la casilla `nonce` a las Transacciones de Ethereum, se
añadió `baseFeePerGas` para los bloques de Ethereum | -| 0.0.5 | Se actualizó la versión del AssemblyScript a la v0.19.10 (esta incluye cambios importantes, recomendamos leer la [`guía de migración`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renombrado como `ethereum.transaction.gasLimit` | -| 0.0.4 | Añadido la casilla de `functionSignature` para la función de Ethereum SmartContractCall | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Notas del lanzamiento | +|:-------:| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Se agregó la casilla `nonce` a las Transacciones de Ethereum, se
añadió `baseFeePerGas` para los bloques de Ethereum | +| 0.0.5 | Se actualizó la versión del AssemblyScript a la v0.19.10 (esta incluye cambios importantes, recomendamos leer la [`guía de migración`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` actualizada a `ethereum.transaction.gasLimit` | +| 0.0.4 | Añadido la casilla de `functionSignature` para la función de Ethereum SmartContractCall | +| 0.0.3 | Añadida la casilla `from` para la función de Ethereum Call
`ethereum.call.address` actualizada a `ethereum.call.to` | +| 0.0.2 | Añadida la casilla de `input` para la función de Ethereum Transaction | ### Built-in Types -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +La documentación sobre las actualizaciones integradas en AssemblyScript puedes encontrarla en la [wiki de AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -The following additional types are provided by `@graphprotocol/graph-ts`. +Las siguientes integraciones son proporcionada por `@graphprotocol/graph-ts`. #### ByteArray @@ -63,24 +63,24 @@ The following additional types are provided by `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` represents an array of `u8`. +`ByteArray` representa una matriz de `u8`. -_Construction_ +_Construcción_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `x.times(y: BigInt): BigInt` – can be written as `x * y`. +- `fromI32(x: i32): ByteArray` - Descompuesta en `x` bytes. +- `fromHexString(hex: string): ByteArray` - La longitud de la entrada debe ser uniforme. Prefijo `0x` es opcional. -_Type conversions_ +_Tipo de conversiones_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. +- `toHexString(): string` - Convierte un prefijo hexadecimal iniciado con `0x`. +- `toString(): string` - Interpreta los bytes en una cadena UTF-8. +- `toBase58(): string` - Codifica los bytes en una cadena base58. +- `toU32(): u32` - Interpeta los bytes en base a little-endian `u32`. Se ejecuta en casos de un overflow. +- `toI32(): i32` - Interpreta los bytes en base a little-endian `i32`. Se ejecuta en casos de un overflow. -_Operators_ +_Operadores_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. +- `equals(y: ByteArray): bool` – se puede escribir como `x == y`. #### BigDecimal @@ -88,30 +88,30 @@ _Operators_ import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal` is used to represent arbitrary precision decimals. +`BigDecimal` se usa para representar una precisión decimal arbitraria. -_Construction_ +_Construcción_ -- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. -- `static fromString(s: string): BigDecimal` – parses from a decimal string. +- `constructor(bigInt: BigInt)` – creará un `BigDecimal` en base a un`BigInt`. +- `static fromString(s: string): BigDecimal` – analizará una cadena de decimales. -_Type conversions_ +_Tipo de conversiones_ -- `toString(): string` – prints to a decimal string. +- `toString(): string` – colocará una cadena de decimales. -_Math_ +_Matemática_ -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `dividedBy(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. 
-- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. -- `neg(): BigDecimal` - can be written as `-x`. +- `plus(y: BigDecimal): BigDecimal` – puede escribirse como `x + y`. +- `minus(y: BigDecimal): BigDecimal` – puede escribirse como `x - y`. +- `times(y: BigDecimal): BigDecimal` – puede escribirse como `x * y`. +- `dividedBy(y: BigDecimal): BigDecimal` – puede escribirse como `x / y`. +- `equals(y: BigDecimal): bool` – puede escribirse como `x == y`. +- `notEqual(y: BigDecimal): bool` – puede escribirse como `x != y`. +- `lt(y: BigDecimal): bool` – puede escribirse como `x < y`. +- `lt(y: BigDecimal): bool` – puede escribirse como `x < y`. +- `gt(y: BigDecimal): bool` – puede escribirse como `x > y`. +- `ge(y: BigDecimal): bool` – puede escribirse como `x >= y`. +- `neg(): BigDecimal` - puede escribirse como `-x`. #### BigInt @@ -119,25 +119,25 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt` es usado para representar nuevos enteros grandes. Esto incluye valores de Ethereum similares a `uint32` hacia `uint256` y `int64` hacia `int256`. Todo por debajo de `uint32`. como el `int32`, `uint24` o `int8` se representa como `i32`. -The `BigInt` class has the following API: +La clase `BigInt` tiene la siguiente API: -_Construction_ +_Construcción_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. +- `BigInt.fromI32(x: i32): BigInt` – creará un `BigInt` en base a un `i32`. +- `BigInt.fromString(s: string): BigInt`– Analizará un `BigInt` dentro de una cadena. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interpretará `bytes` sin firmar, o un little-endian entero. Si tu entrada es big-endian, deberás llamar primero el código `.reverse()`. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – interpretará los `bytes` como una firma, en un little-endian entero. Si tu entrada es big-endian, deberás llamar primero el código `.reverse()`. - _Type conversions_ + _Tipo de conversiones_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. - `x.toString(): string` – turns `BigInt` into a decimal number string. - `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. - `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. -_Math_ +_Matemática_ - `x.plus(y: BigInt): BigInt` – can be written as `x + y`. - `x.minus(y: BigInt): BigInt` – can be written as `x - y`. 
From 1f16062472bb74b8e6c906b42484ebc7df27ef27 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Tue, 18 Jan 2022 11:40:38 -0500 Subject: [PATCH 174/432] New translations curating.mdx (Spanish) --- pages/es/curating.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/curating.mdx b/pages/es/curating.mdx index 2ab3fddbfbaf..425cb5608b6f 100644 --- a/pages/es/curating.mdx +++ b/pages/es/curating.mdx @@ -64,7 +64,7 @@ Los Indexadores pueden encontrar subgrafos para indexar en función de las seña 3. Cuando los curadores queman sus acciones para retirar los GRT, se reducirá la participación de GRT de las acciones restantes. Ten en cuenta que, en algunos casos, los curadores pueden decidir quemar sus acciones, **todas al mismo tiempo**. Esta situación puede ser común si un desarrollador de dApp deja de actualizar la aplicación, no sigue consultando su subgrafo o si falla el mismo. Como resultado, es posible que los curadores solo puedan retirar una fracción de sus GRT iniciales. Si buscas un rol dentro red que conlleve menos riesgos, consulta \[Delegators\] (https://thegraph.com/docs/delegating). 4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. - Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá en una tarifa de curación del 0.5%. - - Si has señalado en una versión de subgrafo específica y falla, tendrás que quemar manualmente tus acciones de curación. Ten en cuenta que puedes recibir más o menos GRT de los que depositaste inicialmente en la curva de curación, y esto es un riesgo que todo curador acepta al empezar. You can then signal on the new subgraph version, thus incurring a 1% curation tax. + - Si has señalado en una versión de subgrafo específica y falla, tendrás que quemar manualmente tus acciones de curación. Ten en cuenta que puedes recibir más o menos GRT de los que depositaste inicialmente en la curva de curación, y esto es un riesgo que todo curador acepta al empezar. Luego podrás firmar la nueva versión del subgrafo, incurriendo así en un impuesto de curación equivalente al 1%. ## Preguntas frecuentes sobre Curación @@ -91,7 +91,7 @@ Se sugiere que no actualices tus subgrafos con demasiada frecuencia. Consulta la Las participaciones de un curador no se pueden "comprar" o "vender" como otros tokens ERC20 con los que seguramente estás familiarizado. Solo pueden anclar (crearse) o quemarse (destruirse) a lo largo de la curva de vinculación de un subgrafo en particular. La cantidad de GRT necesaria para generar una nueva señal y la cantidad de GRT que recibes cuando quemas tu señal existente, está determinada por esa curva de vinculación. Como curador, debes saber que cuando quemas tus acciones de curación para retirar GRT, puedes terminar con más o incluso con menos GRT de los que depositaste en un inicio. -Still confused? Check out our Curation video guide below: +¿Sigues confundido? Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación:
-### Multisig Users +### 多重签名用户 -Multisigs are smart-contracts that can exist only on the network they have been created, so if you created one on Ethereum Mainnet - it will only exist on Mainnet. Since our billing uses Polygon, if you were to bridge GRT to the multisig address on Polygon the funds would be lost. +多重合约是只能存在于它们所创建的网络上的智能合约,所以如果你在以太坊主网上创建了一个--它将只存在于主网上。 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 -To overcome this issue, we created [a dedicated tool](https://multisig-billing.thegraph.com/) that will help you deposit GRT on our billing contract (on behalf of the multisig) with a standard wallet / EOA (an account controlled by a private key). +为了克服这个问题,我们创建了 [一个专门的工具](https://multisig-billing.thegraph.com/),它将帮助你用一个标准的钱包/EOA(一个由私钥控制的账户)在我们的计费合同上存入GRT(代表multisig)。 -You can access our Multisig Billing Tool here: https://multisig-billing.thegraph.com/ +你可以在这里访问我们的Multisig计费工具:https://multisig-billing.thegraph.com/ -This tool will guide you to go through the following steps: +这个工具将指导你完成以下步骤: -1. Connect your standard wallet / EOA (this wallet needs to own some ETH as well as the GRT you want to deposit) -2. Bridge GRT to Polygon. You will have to wait 7-8 minutes after the transaction is complete for the bridge transfer to be finalized. -3. Once your GRT is available on your Polygon balance you can deposit them to the billing contract while specifying the multisig address you are funding in the `Multisig Address` field. +1. 连接你的标准钱包/EOA(这个钱包需要拥有一些ETH以及你要存入的GRT)。 +2. 桥GRT到Polygon。 在交易完成后,你需要等待7-8分钟,以便最终完成桥梁转移。 +3. 一旦你的GRT在你的Polygon余额中可用,你就可以把它们存入账单合同,同时在`Multisig地址栏` 中指定你要资助的multisig地址。 -Once the deposit transaction has been confirmed you can go back to [Subgraph Studio](https://thegraph.com/studio/) and connect with your Gnosis Safe Multisig to create API keys and use them to generate queries. +一旦存款交易得到确认,你就可以回到 [Subgraph Studio](https://thegraph.com/studio/),并与你的Gnosis Safe Multisig连接,以创建API密钥并使用它们来生成查询。 -Those queries will generate invoices that will be paid automatically using the multisig’s billing balance. +这些查询将产生发票,这些发票将使用multisig的账单余额自动支付。 From 855180d34fd8c9e0c79340a60893625b813f3043 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 01:39:07 -0500 Subject: [PATCH 299/432] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index 92449d9a7cb6..0b45b408cf57 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -576,9 +576,9 @@ dataSources: handler: handleNewExchange ``` -### Data Source Templates for Dynamically Created Contracts +### قوالب مصدر البيانات للعقود التي تم إنشاؤها ديناميكيا -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +بعد ذلك ، أضف _ قوالب مصدر البيانات _ إلى الـ manifest. وهي متطابقة مع مصادر البيانات العادية ، باستثناء أنها تفتقر إلى عنوان عقد معرف مسبقا تحت ` source `. عادة ، يمكنك تعريف قالب واحد لكل نوع من أنواع العقود الفرعية المدارة أو المشار إليها بواسطة العقد الأصلي. 
```yaml dataSources: @@ -612,7 +612,7 @@ templates: handler: handleRemoveLiquidity ``` -### Instantiating a Data Source Template +### إنشاء قالب مصدر البيانات In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. From 8ca467f960316492a4e9ab1c49e7c86542fd769f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 02:35:50 -0500 Subject: [PATCH 300/432] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index 0b45b408cf57..dd73aa078dcc 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -614,7 +614,7 @@ templates: ### إنشاء قالب مصدر البيانات -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +في الخطوة الأخيرة ، تقوم بتحديث mapping عقدك الرئيسي لإنشاء instance لمصدر بيانات ديناميكي من أحد القوالب. في هذا المثال ، يمكنك تغيير mapping العقد الرئيسي لاستيراد قالب ` Exchange ` واستدعاء method الـ`Exchange.create(address)` لبدء فهرسة عقد التبادل الجديد. ```typescript import { Exchange } from '../generated/templates' @@ -626,13 +626,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> ** ملاحظة: ** مصدر البيانات الجديد سيعالج فقط الاستدعاءات والأحداث للكتلة التي تم إنشاؤها فيه وجميع الكتل التالية ، ولكنه لن يعالج البيانات التاريخية ، أي البيانات الموجودة في الكتل السابقة. > -> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. +> إذا كانت الكتل السابقة تحتوي على بيانات ذات صلة بمصدر البيانات الجديد ، فمن الأفضل فهرسة تلك البيانات من خلال قراءة الحالة الحالية للعقد وإنشاء كيانات تمثل تلك الحالة في وقت إنشاء مصدر البيانات الجديد. -### Data Source Context +### سياق مصدر البيانات -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +تسمح سياقات مصدر البيانات بتمرير تكوين إضافي عند عمل instantiating للقالب. في مثالنا ، لنفترض أن التبادلات مرتبطة بزوج تداول معين ، والذي تم تضمينه في حدث ` NewExchange `. 
That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' @@ -644,7 +644,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +داخل mapping قالب ` Exchange ` ، يمكن الوصول إلى السياق بعد ذلك: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -653,11 +653,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +هناك setters و getters مثل ` setString ` و ` getString ` لجميع أنواع القيم. ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +يعد ` startBlock ` إعدادا اختياريا يسمح لك بتحديد كتلة في السلسلة والتي سيبدأ مصدر البيانات بالفهرسة. تعيين كتلة البدء يسمح لمصدر البيانات بتخطي الملايين من الكتل التي ربما ليست ذات صلة. عادةً ما يقوم مطور الرسم البياني الفرعي بتعيين ` startBlock ` إلى الكتلة التي تم فيها إنشاء العقد الذكي لمصدر البيانات. ```yaml dataSources: @@ -683,23 +683,23 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> ** ملاحظة: ** يمكن البحث عن كتلة إنشاء العقد بسرعة على Etherscan: > -> 1. Search for the contract by entering its address in the search bar. -> 2. Click on the creation transaction hash in the `Contract Creator` section. -> 3. Load the transaction details page where you'll find the start block for that contract. +> 1. ابحث عن العقد بإدخال عنوانه في شريط البحث. +> 2. انقر فوق hash إجراء الإنشاء في قسم `Contract Creator`. +> 3. قم بتحميل صفحة تفاصيل الإجراء حيث ستجد كتلة البدء لذلك العقد. -## Call Handlers +## معالجات الاستدعاء -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum. Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +بينما توفر الأحداث طريقة فعالة لجمع التغييرات ذات الصلة بحالة العقد ، تتجنب العديد من العقود إنشاء سجلات لتحسين تكاليف الغاز. في هذه الحالات ، يمكن لـ subgraph الاشتراك في الاستدعاء الذي يتم إجراؤه على عقد مصدر البيانات. يتم تحقيق ذلك من خلال تعريف معالجات الاستدعاء التي تشير إلى signature الدالة ومعالج الـ mapping الذي سيعالج الاستدعاءات لهذه الدالة. لمعالجة هذه المكالمات ، سيتلقى معالج الـ mapping الـ`ethereum.Call` كـ argument مع المدخلات المكتوبة والمخرجات من الاستدعاء. ستؤدي الاستدعاءات التي يتم إجراؤها على أي عمق في سلسلة استدعاء الاجراء إلى تشغيل الـ mapping، مما يسمح بالتقاط النشاط مع عقد مصدر البيانات من خلال عقود الـ proxy. 
-Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. +لن يتم تشغيل معالجات الاستدعاء إلا في إحدى الحالتين: عندما يتم استدعاء الدالة المحددة بواسطة حساب آخر غير العقد نفسه أو عندما يتم تمييزها على أنها خارجية في Solidity ويتم استدعاؤها كجزء من دالة أخرى في نفس العقد. -> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. +> ** ملاحظة: ** معالجات الاستدعاء غير مدعومة في Rinkeby أو Goerli أو Ganache. تعتمد معالجات الاستدعاء حاليا على Parity tracing API و هذه الشبكات لا تدعمها. -### Defining a Call Handler +### تعريف معالج الاستدعاء -To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to. +لتعريف معالج استدعاء في الـ manifest الخاص بك ، ما عليك سوى إضافة مصفوفة ` callHandlers ` أسفل مصدر البيانات الذي ترغب في الاشتراك فيه. ```yaml dataSources: @@ -724,7 +724,7 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +الـ `function` هي توقيع الدالة المعياري لفلترة الاستدعاءات من خلالها. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. ### Mapping Function From 8b8ac74686aa6f1cdf18194d0adbf503ff289348 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 02:35:51 -0500 Subject: [PATCH 301/432] New translations deploy-subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/deploy-subgraph-studio.mdx | 48 +++++++++++----------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/zh/studio/deploy-subgraph-studio.mdx b/pages/zh/studio/deploy-subgraph-studio.mdx index 2155d8fe8976..62f614ab7d15 100644 --- a/pages/zh/studio/deploy-subgraph-studio.mdx +++ b/pages/zh/studio/deploy-subgraph-studio.mdx @@ -1,68 +1,68 @@ --- -title: Deploy a Subgraph to the Subgraph Studio +title: 将一个子图部署到子图工作室 --- -Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: +将一个子图部署到子图工作室是非常简单的。 你可以通过以下步骤完成: -- Install The Graph CLI (with both yarn and npm) -- Create your Subgraph in the Subgraph Studio -- Authenticate your account from the CLI -- Deploying a Subgraph to the Subgraph Studio +- 安装Graph CLI(同时使用yarn和npm)。 +- 在子图工作室中创建你的子图 +- 从CLI认证你的账户 +- 将一个子图部署到子图工作室 -## Installing Graph CLI +## 安装Graph CLI -We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. 
+我们使用相同的CLI将子图部署到我们的 [托管服务](https://thegraph.com/hosted-service/) 和[Subgraph Studio](https://thegraph.com/studio/)中。 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 -**Install with yarn:** +**用yarn安装:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**用npm安装:** ```bash npm install -g @graphprotocol/graph-cli ``` -## Create your Subgraph in Subgraph Studio +## 在子图工作室中创建你的子图 -Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. +在部署你的实际子图之前,你需要在 [子图工作室](https://thegraph.com/studio/)中创建一个子图。 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 -## Initialize your Subgraph +## 初始化你的子图 -Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: +一旦你的子图在子图工作室中被创建,你可以用这个命令初始化子图代码。 ```bash graph init --studio ``` -The `` value can be found on your subgraph details page in Subgraph Studio: +``值可以在Subgraph Studio中你的子图详情页上找到。 ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +运行`graph init`后,你会被要求输入你想查询的合同地址、网络和abi。 这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 -## Graph Auth +## Graph 认证 -Before being able to deploy your subgraph to Subgraph Studio, you need to login to your account within the CLI. To do this, you will need your deploy key that you can find on your "My Subgraphs" page or on your subgraph details page. +在能够将你的子图部署到子图工作室之前,你需要在CLI中登录到你的账户。 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 -Here is the command that you need to use to authenticate from the CLI: +以下是你需要使用的命令,以从CLI进行认证: ```bash graph auth --studio ``` -## Deploying a Subgraph to Subgraph Studio +## 将一个子图部署到子图工作室 -Once you are ready, you can deploy your subgraph to Subgraph Studio. Doing this won't publish your subgraph to the decentralized network, it will only deploy it to your Studio account where you will be able to test it and update the metadata. +一旦你准备好了,你可以将你的子图部署到子图工作室。 这样做不会将你的子图发布到去中心化的网络中,它只会将它部署到你的Studio账户中,在那里你将能够测试它并更新元数据。 -Here is the CLI command that you need to use to deploy your subgraph. +这里是你需要使用的CLI命令,以部署你的子图。 ```bash graph deploy --studio ``` -After running this command, the CLI will ask for a version label, you can name it however you want, you can use labels such as `0.1` and `0.2` or use letters as well such as `uniswap-v2-0.1` . Those labels will be visible in Graph Explorer and can be used by curators to decide if they want to signal on this version or not, so choose them wisely. +运行这个命令后,CLI会要求提供一个版本标签,你可以随意命名,你可以使用 `0.1`和 `0.2`这样的标签,或者也可以使用字母,如 `uniswap-v2-0.1` . 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 -Once deployed, you can test your subgraph in Subgraph Studio using the playground, deploy another version if needed, update the metadata, and when you are ready, publish your subgraph to Graph Explorer. 
+一旦部署完毕,你可以在子图工作室中使用控制面板测试你的子图,如果需要的话,可以部署另一个版本,更新元数据,当你准备好后,将你的子图发布到Graph Explorer。 From 5e22cfc46470300a7c91ebd3037dbad7e3cb3ced Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 02:35:52 -0500 Subject: [PATCH 302/432] New translations studio-faq.mdx (Chinese Simplified) --- pages/zh/studio/studio-faq.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/zh/studio/studio-faq.mdx b/pages/zh/studio/studio-faq.mdx index 4db4d7ccddaa..cb63319a714e 100644 --- a/pages/zh/studio/studio-faq.mdx +++ b/pages/zh/studio/studio-faq.mdx @@ -1,21 +1,21 @@ --- -title: Subgraph Studio FAQs +title: 子图工作室常见问题 --- -### 1. How do I create an API Key? +### 1. 我如何创建一个API密钥? -In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. +在Subgraph Studio中,你可以根据需要创建API密钥,并为每个密钥添加安全设置。 -### 2. Can I create multiple API Keys? +### 2. 我可以创建多个API密钥吗? -A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +是的,可以。 你可以创建多个API密钥,在不同的项目中使用。 点击 [这里](https://thegraph.com/studio/apikeys/)查看。 -### 3. How do I restrict a domain for an API Key? +### 3. 我如何为API密钥限制一个域名? -After creating an API Key, in the Security section you can define the domains that can query a specific API Key. +创建了API密钥后,在安全部分,你可以定义可以查询特定API密钥的域。 -### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +### 4. 如果我不是我想使用的子图的开发者,我怎样才能找到子图的查询URL? -You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. +你可以在The Graph Explorer的Subgraph Details部分找到每个子图的查询URL。 当你点击 "查询 "按钮时,你将被引导到一个窗格,在这里你可以查看你感兴趣的子图的查询URL。 然后你可以把 <api_key>/code> 占位符替换成你想在Subgraph Studio中利用的API密钥。

-Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +

请记住,你可以创建一个API密钥并查询发布到网络上的任何子图,即使你自己建立了一个子图。 这些通过新的API密钥进行的查询,与网络上的任何其他查询一样,都是付费查询。

From 49242f03abbba678c7a5f3930830ae63b315e71f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 03:38:11 -0500 Subject: [PATCH 303/432] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 60 +++++++++---------- 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index dd73aa078dcc..080c96288f21 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -724,11 +724,11 @@ dataSources: handler: handleCreateGravatar ``` -الـ `function` هي توقيع الدالة المعياري لفلترة الاستدعاءات من خلالها. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +الـ `function` هي توقيع الدالة المعياري لفلترة الاستدعاءات من خلالها. خاصية `handler` هي اسم الدالة في الـ mapping الذي ترغب في تنفيذه عندما يتم استدعاء الدالة المستهدفة في عقد مصدر البيانات. -### Mapping Function +### دالة الـ Mapping -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +كل معالج استدعاء يأخذ بارامترا واحدا له نوع يتوافق مع اسم الدالة التي تم استدعاؤها. في مثال الـ subgraph أعلاه ، يحتوي الـ mapping على معالج عندما يتم استدعاء الدالة ` createGravatar ` ويتلقى البارامتر ` CreateGravatarCall ` كـ argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -743,22 +743,22 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum. Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +الدالة ` handleCreateGravatar ` تأخذ ` CreateGravatarCall ` جديد وهو فئة فرعية من`ethereum.Call`, ، مقدم بواسطة `graphprotocol/graph-ts@`, والذي يتضمن المدخلات والمخرجات المكتوبة للاستدعاء. يتم إنشاء النوع ` CreateGravatarCall ` من أجلك عندما تشغل`graph codegen`. -## Block Handlers +## معالجات الكتلة -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. +بالإضافة إلى الاشتراك في أحداث العقد أو استدعاءات الدوال، قد يرغب الـ subgraph في تحديث بياناته عند إلحاق كتل جديدة بالسلسلة. لتحقيق ذلك ، يمكن لـ subgraph تشغيل دالة بعد كل كتلة أو بعد الكتل التي تطابق فلترا معرفا مسبقا. -### Supported Filters +### الفلاتر المدعومة ```yaml filter: kind: call ``` -_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ +_سيتم استدعاء المعالج المعرف مرة واحدة لكل كتلة تحتوي على استدعاء للعقد (مصدر البيانات) الذي تم تعريف المعالج ضمنه._ -The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. +عدم وجود فلتر لمعالج الكتلة سيضمن أن المعالج يتم استدعاؤه في كل كتلة. يمكن أن يحتوي مصدر البيانات على معالج كتلة واحد فقط لكل نوع فلتر. 
```yaml dataSources: @@ -785,23 +785,23 @@ dataSources: kind: call ``` -### Mapping Function +### دالة الـ Mapping -The mapping function will receive an `ethereum. Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +دالة الـ mapping ستتلقى `ethereum.Block` كوسيطتها الوحيدة. مثل دوال الـ mapping للأحداث ، يمكن لهذه الدالة الوصول إلى كيانات الـ subgraph الموجودة في المخزن، واستدعاء العقود الذكية وإنشاء الكيانات أو تحديثها. ```typescript import { ethereum } from '@graphprotocol/graph-ts' -export function handleBlock(block: ethereum. Block): void { +export function handleBlock(block: ethereum.Block): void { let id = block.hash.toHex() let entity = new Block(id) entity.save() } ``` -## Anonymous Events +## أحداث الـ Anonymous -If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: +إذا كنت بحاجة إلى معالجة أحداث anonymous في Solidity ، فيمكن تحقيق ذلك من خلال توفير الموضوع 0 للحدث ، كما في المثال: ```yaml eventHandlers: @@ -810,20 +810,20 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +سيتم تشغيل حدث فقط عندما يتطابق كل من التوقيع والموضوع 0. بشكل افتراضي ، `topic0` يساوي hash توقيع الحدث. -## Experimental features +## الميزات التجريبية -Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +بدءًا من ` specVersion ` ` 0.0.4 ` ، يجب الإعلان صراحة عن ميزات الـ subgraph في قسم `features` في المستوى العلوي من ملف الـ manifest ، باستخدام اسم `camelCase` الخاص بهم ، كما هو موضح في الجدول أدناه: -| Feature | Name | -| --------------------------------------------------------- | ------------------------- | -| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | -| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -| [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | +| الميزة | الاسم | +| ----------------------------------------------------- | ------------------------- | +| [أخطاء غير فادحة](#non-fatal-errors) | `nonFatalErrors` | +| [البحث عن نص كامل](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| [IPFS على عقود Ethereum](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +على سبيل المثال ، إذا كان الـ subgraph يستخدم ** بحث النص الكامل ** و ** أخطاء غير فادحة ** ، فإن حقل `features` في الـ manifest يجب أن يكون: ```yaml specVersion: 0.0.4 @@ -834,21 +834,21 @@ features: dataSources: ... ``` -Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +لاحظ أن استخدام ميزة دون الإعلان عنها سيؤدي إلى حدوث ** خطأ تحقق من الصحة ** أثناء نشر الـ subgraph ، ولكن لن تحدث أخطاء إذا تم الإعلان عن الميزة ولكن لم يتم استخدامها. 
-### IPFS on Ethereum Contracts +### IPFS على عقود Ethereum -A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. +حالة الاستخدام الشائعة لدمج IPFS مع Ethereum هي تخزين البيانات على IPFS التي ستكون مكلفة للغاية للحفاظ عليها في السلسلة ، والإشارة إلى IPFS hash في عقود Ethereum. -Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +بالنظر إلى IPFS hashes هذه ، يمكن لـ subgraphs قراءة الملفات المقابلة من IPFS باستخدام ` ipfs.cat ` و ` ipfs.map `. للقيام بذلك بشكل موثوق ، من الضروري أن يتم تثبيت هذه الملفات على عقدة IPFS التي تتصل بها Graph Node التي تقوم بفهرسة الـ subgraph. في حالة [hosted service](https://thegraph.com/hosted-service),يكون هذا [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). -> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن ` ipfs.cat ` و ` ipfs.map ` ، ويجب على المطورين عدم النشر الـ subgraphs للشبكة باستخدام تلك الوظيفة عبر الـ Studio. In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). > **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. -### Non-fatal errors +### أخطاء غير فادحة Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. From 873c8c3b2ea2337b4ba7ab18f096f7ea22bec733 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 03:38:12 -0500 Subject: [PATCH 304/432] New translations developer-faq.mdx (Arabic) --- pages/ar/developer/developer-faq.mdx | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/developer-faq.mdx b/pages/ar/developer/developer-faq.mdx index 41449c60e5ab..a6f67e441b2c 100644 --- a/pages/ar/developer/developer-faq.mdx +++ b/pages/ar/developer/developer-faq.mdx @@ -1,26 +1,26 @@ --- -title: Developer FAQs +title: الأسئلة الشائعة للمطورين --- -### 1. Can I delete my subgraph? +### 1. هل يمكنني حذف ال Subgraph الخاص بي؟ -It is not possible to delete subgraphs once they are created. +لا يمكن حذف ال Subgraph بمجرد إنشائها. -### 2. Can I change my subgraph name? +### 2. هل يمكنني تغيير اسم ال Subgraph الخاص بي؟ -No. Once a subgraph is created, the name cannot be changed. 
Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير الاسم. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك حتى يسهل البحث عنه والتعرف عليه من خلال ال Dapps الأخرى. -### 3. Can I change the GitHub account associated with my subgraph? +### 3. هل يمكنني تغيير حساب GitHub المرتبط ب Subgraph الخاص بي؟ -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير حساب GitHub المرتبط. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك. -### 4. Am I still able to create a subgraph if my smart contracts don't have events? +### 4. هل يمكنني إنشاء Subgraph إذا لم تكن العقود الذكية الخاصة بي تحتوي على أحداث؟ -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. +من المستحسن جدا أن تقوم بإنشاء عقودك الذكية بحيث يكون لديك أحداث مرتبطة بالبيانات التي ترغب في الاستعلام عنها. يتم تشغيل معالجات الأحداث في subgraph بواسطة أحداث العقد، وهي إلى حد بعيد أسرع طريقة لاسترداد البيانات المفيدة. -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. +إذا كانت العقود التي تعمل معها لا تحتوي على أحداث، فيمكن أن يستخدم ال Subgraph معالجات الاتصال والحظر لتشغيل الفهرسة. وهذا غير موصى به لأن الأداء سيكون أبطأ بشكل ملحوظ. -### 5. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. هل من الممكن نشر Subgraph واحد تحمل نفس الاسم لشبكات متعددة؟ You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) From 90390cf2a9502ff95b69af7a60bc2fe779619896 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 03:38:14 -0500 Subject: [PATCH 305/432] New translations subgraph-studio.mdx (Arabic) --- pages/ar/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/studio/subgraph-studio.mdx b/pages/ar/studio/subgraph-studio.mdx index aba3e70756e9..d4e82eeef02e 100644 --- a/pages/ar/studio/subgraph-studio.mdx +++ b/pages/ar/studio/subgraph-studio.mdx @@ -47,7 +47,7 @@ The Graph Network is not yet able to support all of the data-sources & features - Index mainnet Ethereum - Must not use any of the following features: - ipfs.cat & ipfs.map - - Non-fatal errors + - أخطاء غير فادحة - Grafting More features & networks will be added to The Graph Network incrementally. 
From 71801a29f21718997716cb9d930807127b41f07a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 04:58:48 -0500 Subject: [PATCH 306/432] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 40 +++++++++---------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index 080c96288f21..c95b98fdc85d 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -844,17 +844,17 @@ dataSources: ... > ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن ` ipfs.cat ` و ` ipfs.map ` ، ويجب على المطورين عدم النشر الـ subgraphs للشبكة باستخدام تلك الوظيفة عبر الـ Studio. -In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). +من أجل تسهيل ذلك على مطوري الـ subgraph ، فريق Graph كتب أداة لنقل الملفات من عقدة IPFS إلى أخرى ، تسمى [ ipfs-sync ](https://github.com/graphprotocol/ipfs-sync). -> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. +> **[إدارة الميزات](#experimental-features):** يجب الإعلان عن ` ipfsOnEthereumContracts ` ضمن `features` في subgraph manifest. ### أخطاء غير فادحة -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. +افتراضيا ستؤدي أخطاء الفهرسة في الـ subgraphs التي تمت مزامنتها بالفعل ، إلى فشل الـ subgraph وإيقاف المزامنة. يمكن بدلا من ذلك تكوين الـ Subgraphs لمواصلة المزامنة في حالة وجود أخطاء ، عن طريق تجاهل التغييرات التي أجراها المعالج والتي تسببت في حدوث الخطأ. يمنح هذا منشئوا الـ subgraph الوقت لتصحيح الـ subgraphs الخاصة بهم بينما يستمر تقديم الاستعلامات للكتلة الأخيرة ، على الرغم من أن النتائج قد تكون متعارضة بسبب الخطأ الذي تسبب في الخطأ. لاحظ أن بعض الأخطاء لا تزال كارثية دائما ، ولكي تكون غير فادحة ، يجب أن يُعرف الخطأ بأنه حتمي. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن الأخطاء غير الفادحة ، ويجب على المطورين عدم نشر الـ subgraphs على الشبكة باستخدام تلك الوظيفة عبر الـ Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +يتطلب تمكين الأخطاء غير الفادحة تعيين flag الميزة في subgraph manifest كالتالي: ```yaml specVersion: 0.0.4 @@ -864,7 +864,7 @@ features: ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +يجب أن يتضمن الاستعلام أيضا الاستعلام عن البيانات ذات التناقضات المحتملة من خلال الوسيطة ` subgraphError `. 
يوصى أيضا بالاستعلام عن ` _meta ` للتحقق مما إذا كان الـ subgraph قد تخطى الأخطاء ، كما في المثال: ```graphql foos(first: 100, subgraphError: allow) { @@ -876,7 +876,7 @@ _meta { } ``` -If the subgraph encounters an error that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +إذا واجه الـ subgraph خطأ فسيرجع هذا الاستعلام كلا من البيانات وخطأ الـ graphql ضمن رسالة ` "indexing_error" ` ، كما في مثال الاستجابة هذا: ```graphql "data": { @@ -898,11 +898,11 @@ If the subgraph encounters an error that query will return both the data and a g ### Grafting onto Existing Subgraphs -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. +عندما يتم نشر الـ subgraph لأول مرة ، فإنه يبدأ في فهرسة الأحداث من كتلة نشوء السلسلة المتوافقة (أو من ` startBlock ` المعرفة مع كل مصدر بيانات) في بعض الحالات ، يكون من المفيد إعادة استخدام البيانات من subgraph موجود وبدء الفهرسة من كتلة لاحقة. يسمى هذا الوضع من الفهرسة بـ _Grafting_. Grafting ، على سبيل المثال ، مفيد أثناء التطوير لتجاوز الأخطاء البسيطة بسرعة في الـ mappings ، أو للحصول مؤقتا على subgraph موجود يعمل مرة أخرى بعد فشله. -> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> ** ملاحظة: ** الـ Grafting يتطلب أن المفهرس قد فهرس الـ subgraph الأساسي. لا يوصى باستخدامه على شبكة The Graph في الوقت الحالي ، ولا ينبغي للمطورين نشر الـ subgraphs على الشبكة باستخدام تلك الوظيفة عبر الـ Studio. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: +يتم عمل Grafte لـ subgraph في الـ subgraph الأساسي عندما يحتوي الـ subgraph manifest في ` subgraph.yaml ` على كتلة ` graft ` في المستوى العلوي: ```yaml description: ... @@ -911,18 +911,18 @@ graft: block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +عندما يتم نشر subgraph يحتوي الـ manifest على كتلة ` graft ` ، فإن Graph Node سوف تنسخ بيانات ` base ` subgraph بما في ذلك الـ ` block ` المعطى ثم يتابع فهرسة الـ subgraph الجديد من تلك الكتلة. يجب أن يوجد الـ subgraph الأساسي في instance الـ Graph Node المستهدف ويجب أن يكون قد تمت فهرسته حتى الكتلة المحددة على الأقل. بسبب هذا التقييد ، يجب استخدام الـ grafting فقط أثناء التطوير أو أثناء الطوارئ لتسريع إنتاج non-grafted subgraph مكافئ. 
-Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. أثناء تهيئة الـ grafted subgraph ، سيقوم الـ Graph Node بتسجيل المعلومات حول أنواع الكيانات التي تم نسخها بالفعل. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: +يمكن أن يستخدم الـ grafted subgraph مخطط GraphQL غير مطابق لمخطط الـ subgraph الأساسي ، ولكنه متوافق معه. يجب أن يكون مخطط الـ subgraph صالحا في حد ذاته ولكنه قد ينحرف عن مخطط الـ subgraph الأساسي بالطرق التالية: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces +- يضيف أو يزيل أنواع الكيانات +- يزيل الصفات من أنواع الكيانات +- يضيف صفات nullable لأنواع الكيانات +- يحول صفات non-nullable إلى صفات nullable +- يضيف قيما إلى enums +- يضيف أو يزيل الواجهات - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[إدارة الميزات](#experimental-features):**يجب الإعلان عن ` التطعيم ` ضمن `features` في subgraph manifest. From 8b0d91b01a23eaedce82a00eb9790fff20a5f42d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 04:58:49 -0500 Subject: [PATCH 307/432] New translations developer-faq.mdx (Arabic) --- pages/ar/developer/developer-faq.mdx | 48 ++++++++++++++-------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ar/developer/developer-faq.mdx b/pages/ar/developer/developer-faq.mdx index a6f67e441b2c..28da7843f168 100644 --- a/pages/ar/developer/developer-faq.mdx +++ b/pages/ar/developer/developer-faq.mdx @@ -22,49 +22,49 @@ title: الأسئلة الشائعة للمطورين ### 5. هل من الممكن نشر Subgraph واحد تحمل نفس الاسم لشبكات متعددة؟ -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +ستحتاج إلى أسماء مختلفه لشبكات متعددة. ولا يمكن أن يكون لديك Subgraph مختلف تحت نفس الاسم ، إلا أن هناك طرقًا ملائمة لأمتلاك قاعدة بيانات واحدة لشبكات متعددة. اكتشف المزيد حول هذا الأمر في وثائقنا: [ إعادة نشر ال Subgraph ](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. How are templates different from data sources? +### 6. كيف تختلف النماذج عن مصادر البيانات؟ -Templates allow you to create data sources on the fly, while your subgraph is indexing. 
It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +تسمح لك النماذج بإنشاء مصادر البيانات على الفور ، أثناء فهرسة ال Subgraph الخاص بك. قد يكون الأمر هو أن عقدك سينتج عنه عقود جديدة عندما يتفاعل الأشخاص معه ، وبما أنك تعرف شكل هذه العقود (ABI ، الأحداث ، إلخ) مسبقًا ، يمكنك تحديد الطريقة التي تريد فهرستها بها في النموذج ومتى يتم إنتاجها ، وسيقوم ال Subgraph الخاص بك بإنشاء مصدر بيانات ديناميكي عن طريق توفير عنوان العقد. -Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). +راجع قسم "إنشاء نموذج مصدر بيانات" في: [ نماذج مصدر البيانات ](/developer/create-subgraph-hosted#data-source-templates). -### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 7. كيف أتأكد من أنني أستخدم أحدث إصدار من graph-node لعمليات النشر المحلية الخاصة بي؟ -You can run the following command: +يمكنك تشغيل الأمر التالي: ```sh docker pull graphprotocol/graph-node:latest ``` -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +** ملاحظة: ** ستستخدم docker / docker-compose دائمًا أي إصدار من graph-node تم سحبه في المرة الأولى التي قمت بتشغيلها ، لذلك من المهم القيام بذلك للتأكد من أنك محدث بأحدث إصدار graph-node. -### 8. How do I call a contract function or access a public state variable from my subgraph mappings? +### 8. كيف يمكنني استدعاء دالة العقد أو الوصول إلى متغير الحالة العامة من Subgraph mappings الخاصة بي؟ -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). +ألقِ نظرة على حالة ` الوصول إلى العقد الذكي ` داخل القسم [ AssemblyScript API ](/developer/assemblyscript-api). -### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. هل من الممكن إنشاء Subgraph باستخدام`graph init` from `graph-cli`بعقدين؟ أو هل يجب علي إضافة مصدر بيانات آخر يدويًا في ` subgraph.yaml ` بعد تشغيل ` graph init `؟ -Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. +للأسف هذا غير ممكن حاليا. الغرض من ` graph init ` هو أن تكون نقطة بداية أساسية حيث يمكنك من خلالها إضافة المزيد من مصادر البيانات يدويًا. -### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? +### 10. أرغب في المساهمة أو إضافة مشكلة GitHub ، أين يمكنني العثور على مستودعات مفتوحة المصدر؟ - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 11. ما هي الطريقة الموصى بها لإنشاء معرفات "تلقائية" لكيان عند معالجة الأحداث؟ -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. 
You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +إذا تم إنشاء كيان واحد فقط أثناء الحدث ولم يكن هناك أي شيء متاح بشكل أفضل ، فسيكون hash الإجراء + فهرس السجل فريدا. يمكنك تشويشها عن طريق تحويلها إلى Bytes ثم تمريرها عبر ` crypto.keccak256 ` ولكن هذا لن يجعلها فريدة من نوعها. -### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 12. عند الاستماع إلى عدة عقود ، هل من الممكن تحديد أمر العقد للاستماع إلى الأحداث؟ -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. -### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? +### 13. هل من الممكن التفريق بين الشبكات (mainnet، Kovan، Ropsten، local) من داخل معالجات الأحداث؟ -Yes. You can do this by importing `graph-ts` as per the example below: +نعم. يمكنك القيام بذلك عن طريق استيراد ` graph-ts ` كما في المثال أدناه: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,17 +73,17 @@ dataSource.network() dataSource.address() ``` -### 14. Do you support block and call handlers on Rinkeby? +### 14. هل تدعم معالجات الكتل والإستدعاء على Rinkeby؟ -On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. +في Rinkeby ، ندعم معالجات الكتل ، لكن بدون ` filter: call `. معالجات الاستدعاء غير مدعومة في الوقت الحالي. -### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? +### 15. هل يمكنني استيراد ethers.js أو مكتبات JS الأخرى إلى ال Subgraph mappings الخاصة بي؟ -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +ليس حاليًا ، حيث تتم كتابة ال mappings في AssemblyScript. أحد الحلول البديلة الممكنة لذلك هو تخزين البيانات الأولية في الكيانات وتنفيذ المنطق الذي يتطلب مكتبات JS على ال client. -### 16. Is it possible to specifying what block to start indexing on? +### 16. هل من الممكن تحديد الكتلة التي سيتم بدء الفهرسة عليها؟ -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks +نعم. يحدد ` dataSources.source.startBlock ` في ملف ` subgraph.yaml ` رقم الكتلة الذي يبدأ مصدر البيانات الفهرسة منها. في معظم الحالات نقترح استخدام الكتلة التي تم إنشاء العقد من خلالها: Start blocks ### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. From 1bbc0ca82bf6144cc3a96256255d5de53fdd1b9d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 05:55:39 -0500 Subject: [PATCH 308/432] New translations developer-faq.mdx (Arabic) --- pages/ar/developer/developer-faq.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/ar/developer/developer-faq.mdx b/pages/ar/developer/developer-faq.mdx index 28da7843f168..1f0a5cbd6f81 100644 --- a/pages/ar/developer/developer-faq.mdx +++ b/pages/ar/developer/developer-faq.mdx @@ -85,27 +85,27 @@ dataSource.address() نعم. 
يحدد ` dataSources.source.startBlock ` في ملف ` subgraph.yaml ` رقم الكتلة الذي يبدأ مصدر البيانات الفهرسة منها. في معظم الحالات نقترح استخدام الكتلة التي تم إنشاء العقد من خلالها: Start blocks -### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. +### 17. هل هناك بعض النصائح لتحسين أداء الفهرسة؟ تستغرق مزامنة ال subgraph وقتًا طويلاً جدًا. -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) +نعم ، يجب إلقاء نظرة على ميزة start block الاختيارية لبدء الفهرسة من الكتل التي تم نشر العقد: [ start block ](/developer/create-subgraph-hosted#start-blocks) -### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? +### 18. هل هناك طريقة للاستعلام عن ال Subgraph بشكل مباشر مباشرةً رقم الكتلة الأخير الذي تمت فهرسته؟ -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +نعم! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. What networks are supported by The Graph? +### 19. ما هي الشبكات الذي يدعمها The Graph؟ -The graph-node supports any EVM-compatible JSON RPC API chain. +تدعم graph-node أي سلسلة API JSON RPC متوافقة مع EVM. -The Graph Network supports subgraphs indexing mainnet Ethereum: +شبكة The Graph تدعم ال subgraph وذلك لفهرسة mainnet Ethereum: - `mainnet` -In the Hosted Service, the following networks are supported: +في ال Hosted Service ، يتم دعم الشبكات التالية: - Ethereum mainnet - Kovan @@ -129,9 +129,9 @@ In the Hosted Service, the following networks are supported: - Fuse - Moonbeam - Arbitrum One -- Arbitrum Testnet (on Rinkeby) +- (Arbitrum Testnet (on Rinkeby - Optimism -- Optimism Testnet (on Kovan) +- (Optimism Testnet (on Kovan There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). From c46c4edda5dd8a38f6428fc0da2c57e010c15159 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 05:55:41 -0500 Subject: [PATCH 309/432] New translations graphql-api.mdx (Arabic) --- pages/ar/developer/graphql-api.mdx | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/ar/developer/graphql-api.mdx b/pages/ar/developer/graphql-api.mdx index 63be312d7eaa..7fec9b44fa66 100644 --- a/pages/ar/developer/graphql-api.mdx +++ b/pages/ar/developer/graphql-api.mdx @@ -2,15 +2,15 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for the Graph Protocol. +يشرح هذا الدليل GraphQL Query API المستخدمة في بروتوكول Graph. -## Queries +## الاستعلامات -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +في مخطط الـ subgraph الخاص بك ، يمكنك تعريف أنواع وتسمى `Entities`. 
لكل نوع من `Entity` ، سيتم إنشاء حقل `entity` و `entities` في المستوى الأعلى من نوع `Query`. لاحظ أنه لا يلزم تضمين ` query ` أعلى استعلام ` graphql ` عند استخدام The Graph. -#### Examples +#### أمثلة -Query for a single `Token` entity defined in your schema: +الاستعلام عن كيان `Token` واحد معرف في مخططك: ```graphql { @@ -21,9 +21,9 @@ Query for a single `Token` entity defined in your schema: } ``` -**Note:** When querying for a single entity, the `id` field is required and it must be a string. +** ملاحظة: ** عند الاستعلام عن كيان واحد ، فإن الحقل ` id ` يكون مطلوبا ويجب أن يكون string. -Query all `Token` entities: +الاستعلام عن جميع كيانات `Token`: ```graphql { @@ -34,9 +34,9 @@ Query all `Token` entities: } ``` -### Sorting +### الفرز -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +عند الاستعلام عن مجموعة ، يمكن استخدام البارامتر `orderBy` للترتيب حسب صفة معينة. بالإضافة إلى ذلك ، يمكن استخدام ` OrderDirection ` لتحديد اتجاه الفرز ،`asc` للترتيب التصاعدي أو `desc` للترتيب التنازلي. #### مثال @@ -49,17 +49,17 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe } ``` -### Pagination +### ترقيم الصفحات -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. +عند الاستعلام عن مجموعة ، يمكن استخدام البارامتر `first` لترقيم الصفحات من بداية المجموعة. من الجدير بالذكر أن ترتيب الفرز الافتراضي يكون حسب الـ ID بترتيب رقمي تصاعدي ، وليس حسب وقت الإنشاء. -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +علاوة على ذلك ، يمكن استخدام البارامتر ` skip ` لتخطي الكيانات وترقيم الصفحات. على سبيل المثال `first:100` يعرض أول 100 عنصر و `first:100, skip:100` يعرض 100 عنصر التالية. -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +الاستعلامات يجب أن تتجنب استخدام قيم `skip` كبيرة جدا نظرا لأنها تؤدي بشكل عام أداء ضعيفا. لجلب عدد كبير من العناصر ، من الأفضل تصفح الكيانات بناء على صفة كما هو موضح في المثال الأخير. #### مثال -Query the first 10 tokens: +استعلم عن أول 10 توكن: ```graphql { @@ -211,7 +211,7 @@ Fulltext search operators: | `<->` | `Follow by` | Specify the distance between two words. | | `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | -#### Examples +#### أمثلة Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. 
From 561352458b4779c2ffaa0fb385a44e6bbd1e1b2e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 07:11:45 -0500 Subject: [PATCH 310/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/hosted-service/deploy-subgraph-hosted.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index bdc532e205e4..4c1df5043db2 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -1,12 +1,12 @@ --- -title: Deploy a Subgraph to the Hosted Service +title: 将子图部署到托管服务上 --- -If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. +如果您尚未查看,请先查看如何编写组成 [子图清单](/developer/create-subgraph-hosted#the-subgraph-manifest) 的文件以及如何安装 [Graph CLI](https://github.com/graphprotocol/graph-cli) 为您的子图生成代码。 现在,让我们将您的子图部署到托管服务上。 -## Create a Hosted Service account +## 创建托管服务帐户 -Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. +在使用托管服务之前,请先在我们的托管服务中创建一个帐户。 为此,您将需要一个 [Github](https://github.com/) 帐户;如果您还没有,您需要先创建一个账户。 然后,导航到 [托管服务](https://thegraph.com/hosted-service/), 单击 _'使用 Github 注册'_ 按钮并完成 Github 的授权流程。 ## Store the Access Token From 9d48626496b84b7d1385f2339dc5bb93db85f86c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 08:07:57 -0500 Subject: [PATCH 311/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/hosted-service/deploy-subgraph-hosted.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index 4c1df5043db2..30471b974dff 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -8,17 +8,17 @@ title: 将子图部署到托管服务上 在使用托管服务之前,请先在我们的托管服务中创建一个帐户。 为此,您将需要一个 [Github](https://github.com/) 帐户;如果您还没有,您需要先创建一个账户。 然后,导航到 [托管服务](https://thegraph.com/hosted-service/), 单击 _'使用 Github 注册'_ 按钮并完成 Github 的授权流程。 -## Store the Access Token +## 存储访问令牌 -After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. +创建帐户后,导航到您的 [仪表板](https://thegraph.com/hosted-service/dashboard)。 复制仪表板上显示的访问令牌并运行 `graph auth --product hosted-service `。 这会将访问令牌存储在您的计算机上。 如果您不需要重新生成访问令牌,您就只需要这样做一次。 -## Create a Subgraph on the Hosted Service +## 在托管服务上创建子图 -Before deploying the subgraph, you need to create it in The Graph Explorer. 
Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: +在部署子图之前,您需要在 The Graph Explorer 中创建它。 转到 [仪表板](https://thegraph.com/hosted-service/dashboard) ,单击 _'添加子图'_ 按钮,并根据需要填写以下信息: -**Image** - Select an image to be used as a preview image and thumbnail for the subgraph. +**图像** - 选择要用作子图的预览图和缩略图的图像。 -**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ +**子图名称** - 子图名称连同下面将要创建的子图帐户名称,将定义用于部署和 GraphQL 端点的`account-name/subgraph-name`样式名称。 _此字段以后无法更改。_ **Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ From c52fe091ac795bf9f3c05b9818b1b4fced496e0b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 08:07:58 -0500 Subject: [PATCH 312/432] New translations query-hosted-service.mdx (Chinese Simplified) --- .../zh/hosted-service/query-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/hosted-service/query-hosted-service.mdx b/pages/zh/hosted-service/query-hosted-service.mdx index 731e3a3120b2..ad41c4bede90 100644 --- a/pages/zh/hosted-service/query-hosted-service.mdx +++ b/pages/zh/hosted-service/query-hosted-service.mdx @@ -1,14 +1,14 @@ --- -title: Query the Hosted Service +title: 查询托管服务 --- -With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +部署子图后,请访问[托管服务](https://thegraph.com/hosted-service/) 以打开 [GraphiQL](https://github.com/graphql/graphiql) 界面,您可以在其中通过发出查询和查看数据模式来探索已经部署的子图的 GraphQL API。 -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +下面提供了一个示例,但请参阅 [查询 API ](/developer/graphql-api) 以获取有关如何查询子图实体的完整参考。 -#### Example +#### 示例 -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +此查询列出了我们的映射创建的所有计数器。 由于我们只创建一个,结果将只包含我们的一个 `默认计数器`: ```graphql { @@ -19,10 +19,10 @@ This query lists all the counters our mapping has created. Since we only create } ``` -## Using The Hosted Service +## 使用托管服务 -The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service. 
+Graph Explorer 及其 GraphQL playground是探索和查询托管服务上部署的子图的有用方式。 -Some of the main features are detailed below: +下面详细介绍了一些主要功能: -![Explorer Playground](/img/explorer-playground.png) +![探索Playground](/img/explorer-playground.png) From cd09de778efcbfad03d90d76227d25667da3c0ab Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 08:07:59 -0500 Subject: [PATCH 313/432] New translations what-is-hosted-service.mdx (Chinese Simplified) --- pages/zh/hosted-service/what-is-hosted-service.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/zh/hosted-service/what-is-hosted-service.mdx b/pages/zh/hosted-service/what-is-hosted-service.mdx index 7f604c8dc31a..24d7068c1b44 100644 --- a/pages/zh/hosted-service/what-is-hosted-service.mdx +++ b/pages/zh/hosted-service/what-is-hosted-service.mdx @@ -1,8 +1,8 @@ --- -title: What is the Hosted Service? +title: 什么是托管服务? --- -This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +本节将引导您将子图部署到 [托管服务](https://thegraph.com/hosted-service/) 提醒一下,托管服务不会很快关闭。 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. @@ -42,9 +42,9 @@ graph init --from-example --product hosted-service / The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. -## Supported Networks on the Hosted Service +## 托管服务支持的网络 -Please note that the following networks are supported on the Hosted Service. 
Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) +请注意托管服务支持以下网络。 [Graph Explorer](https://thegraph.com/explorer)目前不支持以太坊主网(“主网”)之外的网络。 - `mainnet` - `kovan` From 4484155ef8e7093c297f650790481b59d31781a0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 09:28:52 -0500 Subject: [PATCH 314/432] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 58 ++++++++++--------- 1 file changed, 32 insertions(+), 26 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index 9c6be1ac5bdd..2ce50dd28ab1 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -545,13 +545,13 @@ import { Gravatar } from '../generated/schema' La generación de código no comprueba tu código de mapeo en `src/mapping.ts`. Si quieres comprobarlo antes de intentar desplegar tu subgrafo en the Graph Explorer, puedes ejecutar `yarn build` y corregir cualquier error de sintaxis que el compilador de TypeScript pueda encontrar. -## Data Source Templates +## Plantillas de Fuentes de Datos -A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +Un patrón común en los contratos inteligentes de Ethereum es el uso de contratos de registro o fábrica, donde un contrato crea, gestiona o hace referencia a un número arbitrario de otros contratos que tienen cada uno su propio estado y eventos. Las direcciones de estos subcontratos pueden o no conocerse de antemano y muchos de estos contratos pueden crearse y/o añadirse con el tiempo. Por eso, en estos casos, es imposible definir una única fuente de datos o un número fijo de fuentes de datos y se necesita un enfoque más dinámico: _data source templates_. -### Data Source for the Main Contract +### Fuente de Datos para el Contrato Principal -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. +En primer lugar, define una fuente de datos regular para el contrato principal. El siguiente fragmento muestra un ejemplo simplificado de fuente de datos para el contrato de fábrica de exchange [Uniswap](https://uniswap.io). Nota el handler `NewExchange(address,address)` del evento. Se emite cuando el contrato de fábrica crea un nuevo contrato de exchange en la cadena. ```yaml dataSources: @@ -576,39 +576,45 @@ dataSources: handler: handleNewExchange ``` -### Data Source Templates for Dynamically Created Contracts +### Plantillas de Fuentes de Datos para Contratos Creados Dinámicamente -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. 
Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +A continuación, añade _plantillas de origen de datos_ al manifiesto. Son idénticas a las fuentes de datos normales, salvo que carecen de una dirección de contrato predefinida en `source`. Normalmente, defines un modelo para cada tipo de subcontrato gestionado o referenciado por el contrato principal. ```yaml dataSources: - kind: ethereum/contract - name: Gravity - network: dev + name: Factory + # ... other source fields for the main contract ... +templates: + - name: Exchange + kind: ethereum/contract + network: mainnet source: - address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' - abi: -Gravity + abi: Exchange mapping: kind: ethereum/events apiVersion: 0.0.6 language: wasm/assemblyscript + file: ./src/mappings/exchange.ts entities: - - Gravatar - - Transaction + - Exchange abis: - - name: Gravity - file: ./abis/Gravity.json - blockHandlers: - - handler: handleBlock - - handler: handleBlockWithCallToContract - filter: - kind: call + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity ``` -### Instantiating a Data Source Template +### Instanciación de una Plantilla de Fuente de Datos -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +En el último paso, actualiza la asignación del contrato principal para crear una instancia de fuente de datos dinámica a partir de una de las plantillas. En este ejemplo, cambiarías el mapeo del contrato principal para importar la plantilla `Exchange` y llamaría al método `Exchange.create(address)` en él para empezar a indexar el nuevo contrato de exchange. ```typescript import { Exchange } from '../generated/templates' @@ -620,13 +626,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> **Nota:** Un nuevo origen de datos sólo procesará las llamadas y los eventos del bloque en el que fue creado y todos los bloques siguientes, pero no procesará los datos históricos, es decir, los datos que están contenidos en bloques anteriores. > -> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. +> Si los bloques anteriores contienen datos relevantes para la nueva fuente de datos, lo mejor es indexar esos datos leyendo el estado actual del contrato y creando entidades que representen ese estado en el momento de crear la nueva fuente de datos. -### Data Source Context +### Contexto de la Fuente de Datos -Data source contexts allow passing extra configuration when instantiating a template. 
In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +Los contextos de fuentes de datos permiten pasar una configuración extra al instanciar una plantilla. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' From b164f80a4253f142290ac1469ecbbb04a00029f6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 10:58:37 -0500 Subject: [PATCH 315/432] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 76 +++++++++---------- 1 file changed, 38 insertions(+), 38 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index 2ce50dd28ab1..ef8d0d97154a 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -632,7 +632,7 @@ export function handleNewExchange(event: NewExchange): void { ### Contexto de la Fuente de Datos -Los contextos de fuentes de datos permiten pasar una configuración extra al instanciar una plantilla. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +Los contextos de fuentes de datos permiten pasar una configuración extra al instanciar una plantilla. En nuestro ejemplo, digamos que los exchanges se asocian a un par de trading concreto, que se incluye en el evento `NewExchange`. Esa información se puede pasar a la fuente de datos instanciada, así: ```typescript import { Exchange } from '../generated/templates' @@ -644,7 +644,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +Dentro de un mapeo de la plantilla `Exchange`, se puede acceder al contexto: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -653,11 +653,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +Hay setters y getters como `setString` and `getString` para todos los tipos de valores. -## Start Blocks +## Bloques de Inicio -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +El `startBlock` es un ajuste opcional que permite definir a partir de qué bloque de la cadena comenzará a indexar la fuente de datos. Establecer el bloque inicial permite a la fuente de datos omitir potencialmente millones de bloques que son irrelevantes. Normalmente, un desarrollador de subgrafos establecerá `startBlock` al bloque en el que se creó el contrato inteligente de la fuente de datos. 
```yaml dataSources: @@ -683,23 +683,23 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> **Nota:** El bloque de creación del contrato se puede buscar rápidamente en Etherscan: > -> 1. Search for the contract by entering its address in the search bar. -> 2. Click on the creation transaction hash in the `Contract Creator` section. -> 3. Load the transaction details page where you'll find the start block for that contract. +> 1. Busca el contrato introduciendo su dirección en la barra de búsqueda. +> 2. Haz clic en el hash de la transacción de creación en la sección `Contract Creator`. +> 3. Carga la página de detalles de la transacción, donde encontrarás el bloque inicial de ese contrato. -## Call Handlers +## Handlers de Llamadas -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +Aunque los eventos proporcionan una forma eficaz de recoger los cambios relevantes en el estado de un contrato, muchos contratos evitan generar registros para optimizar los costos de gas. En estos casos, un subgrafo puede suscribirse a las llamadas realizadas al contrato de la fuente de datos. Esto se consigue definiendo los handlers de llamadas que hacen referencia a la firma de la función y al handler de mapeo que procesará las llamadas a esta función. Para procesar estas llamadas, el manejador de mapeo recibirá un `ethereum.Call` como argumento con las entradas y salidas tipificadas de la llamada. Las llamadas realizadas en cualquier profundidad de la cadena de llamadas de una transacción activarán el mapeo, permitiendo capturar la actividad con el contrato de origen de datos a través de los contratos proxy. -Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. +Los handlers de llamadas sólo se activarán en uno de estos dos casos: cuando la función especificada sea llamada por una cuenta distinta del propio contrato o cuando esté marcada como externa en Solidity y sea llamada como parte de otra función en el mismo contrato. -> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. +> **Nota:**Los handlers de llamadas no son compatibles con Rinkeby, Goerli o Ganache. Los handlers de llamadas dependen actualmente de la API de rastreo de Parity y estas redes no la admiten. -### Defining a Call Handler +### Definición de un Handler de Llamadas -To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to. 
+Para definir un handler de llamadas en su manifiesto simplemente añade una array `callHandlers` bajo la fuente de datos a la que deseas suscribirte. ```yaml dataSources: @@ -724,11 +724,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +La `función` es la firma de la función normalizada por la que se filtran las llamadas. La propiedad `handler` es el nombre de la función de tu mapeo que quieres ejecutar cuando se llame a la función de destino en el contrato de origen de datos. -### Mapping Function +### Función Mapeo -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Cada handler de llamadas toma un solo parámetro que tiene un tipo correspondiente al nombre de la función llamada. En el subgrafo de ejemplo anterior, el mapeo contiene un handler para cuando la función `createGravatar` es llamada y recibe un parámetro `CreateGravatarCall` como argumento: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -743,22 +743,22 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +La función `handleCreateGravatar` toma una nueva `CreateGravatarCall` que es una subclase de `ethereum.Call`, proporcionada por `@graphprotocol/graph-ts`, que incluye las entradas y salidas tipificadas de la llamada. El tipo `CreateGravatarCall` se genera por ti cuando ejecutas `graph codegen`. -## Block Handlers +## Handlers de Bloques -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. +Además de suscribirse a eventos del contracto o llamadas a funciones, un subgrafo puede querer actualizar sus datos a medida que se añaden nuevos bloques a la cadena. Para ello, un subgrafo puede ejecutar una función después de cada bloque o después de los bloques que coincidan con un filtro predefinido. -### Supported Filters +### Filtros Admitidos ```yaml filter: kind: call ``` -_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ +_El handler definido será llamado una vez por cada bloque que contenga una llamada al contrato (fuente de datos) bajo el cual está definido el handler._ -The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. +La ausencia de un filtro para un handler de bloque asegurará que el handler sea llamado en cada bloque. Una fuente de datos sólo puede contener un handler de bloque para cada tipo de filtro. 
```yaml dataSources: @@ -785,23 +785,23 @@ dataSources: kind: call ``` -### Mapping Function +### Función de Mapeo -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +La función de mapeo recibirá un `ethereum.Block` como único argumento. Al igual que las funciones de mapeo de eventos, esta función puede acceder a las entidades del subgrafo existentes en el almacén, llamar a los contratos inteligentes y crear o actualizar entidades. ```typescript import { ethereum } from '@graphprotocol/graph-ts' -export function handleBlock(block: ethereum. Block): void { +export function handleBlock(block: ethereum.Block): void { let id = block.hash.toHex() let entity = new Block(id) entity.save() } ``` -## Anonymous Events +## Eventos Anónimos -If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: +Si necesitas procesar eventos anónimos en Solidity, puedes hacerlo proporcionando el tema 0 del evento, como en el ejemplo: ```yaml eventHandlers: @@ -810,20 +810,20 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +Un evento sólo se activará cuando la firma y el tema 0 coincidan. Por defecto, `topic0` es igual al hash de la firma del evento. -## Experimental features +## Características experimentales -Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Las características del subgrafo que parten de `specVersion` `0.0.4` deben declararse explícitamente en la sección `features` del nivel superior del archivo del manifiesto, utilizando su nombre `camelCase`, como se indica en la tabla siguiente: -| Feature | Name | +| Característica | Nombre | | --------------------------------------------------------- | ------------------------- | | [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | | [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +Por ejemplo, si un subgrafo utiliza las características **Full-Text Search** y **Non-fatal Errors**, el campo `features` del manifiesto debería ser: ```yaml specVersion: 0.0.4 @@ -834,13 +834,13 @@ features: dataSources: ... ``` -Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +Ten en cuenta que el uso de una característica sin declararla incurrirá en un **error de validación** durante el despliegue del subgrafo, pero no se producirá ningún error si se declara una característica pero no se utiliza. -### IPFS on Ethereum Contracts +### IPFS en Contratos de Ethereum -A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. 
+Un caso de uso común para combinar IPFS con Ethereum es almacenar datos en IPFS que serían demasiado costosos de mantener en la cadena, y hacer referencia al hash de IPFS en los contratos de Ethereum. -Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +Dados estos hashes de IPFS, los subgrafos pueden leer los archivos correspondientes desde IPFS utilizando `ipfs.cat` y `ipfs.map`. Sin embargo, para hacer esto de forma fiable, es necesario que estos archivos estén anclados en el nodo IPFS al que se conecta the Graph Node que indexa el subgrafo. En el caso del [hosted service](https://thegraph.com/hosted-service), es [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). > **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. From 5e7de6bb21ca536bea717397395331fa6a5baca8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 10:58:38 -0500 Subject: [PATCH 316/432] New translations subgraph-studio.mdx (Spanish) --- pages/es/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/studio/subgraph-studio.mdx b/pages/es/studio/subgraph-studio.mdx index 9585d185b7f4..906211bf5062 100644 --- a/pages/es/studio/subgraph-studio.mdx +++ b/pages/es/studio/subgraph-studio.mdx @@ -36,7 +36,7 @@ The best part! The best part! When you first create a subgraph, you’ll be dire - Your Subgraph Name - Image -- Descripcion +- Descripción - Categories - Website From caac458fc40908df51cd6be42735b771858d7705 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 11:53:40 -0500 Subject: [PATCH 317/432] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 48 +++++++++---------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index ef8d0d97154a..76a1af304e61 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -842,19 +842,19 @@ Un caso de uso común para combinar IPFS con Ethereum es almacenar datos en IPFS Dados estos hashes de IPFS, los subgrafos pueden leer los archivos correspondientes desde IPFS utilizando `ipfs.cat` y `ipfs.map`. Sin embargo, para hacer esto de forma fiable, es necesario que estos archivos estén anclados en el nodo IPFS al que se conecta the Graph Node que indexa el subgrafo. En el caso del [hosted service](https://thegraph.com/hosted-service), es [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). -> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Nota:** The Graph Network todavía no admite `ipfs.cat` y `ipfs.map`, y los desarrolladores no deben desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio. 
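A minimal sketch of the `ipfs.cat` flow described above, assuming the file is pinned on the IPFS node that the indexing Graph Node connects to (the hash literal, the JSON field and the function name below are hypothetical placeholders):

```typescript
import { Bytes, ipfs, json } from '@graphprotocol/graph-ts'

export function readPinnedFile(): void {
  // Hypothetical hash; in practice it would come from an event parameter or a contract call
  let hash = 'QmExampleHash'

  // ipfs.cat returns null when the file is not available on the connected IPFS node
  let data = ipfs.cat(hash)
  if (data) {
    // Interpret the pinned file as JSON and read one field from it
    let obj = json.fromBytes(data as Bytes).toObject()
    let name = obj.get('name')
    // ...store the value on an entity here
  }
}
```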
-In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). +Para facilitar esto a los desarrolladores de subgrafos, el equipo de The Graph escribió una herramienta para transferir archivos de un nodo IPFS a otro, llamada [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). -> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. +> **[La Gestión de Funciones](#experimental-features):** `ipfsOnEthereumContracts` debe declararse en `funciones` en el manifiesto del subgrafo. -### Non-fatal errors +### Errores no fatales -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. +Los errores de indexación en subgrafos ya sincronizados harán que, por defecto, el subgrafo falle y deje de sincronizarse. Los subgrafos pueden ser configurados alternativamente para continuar la sincronización en presencia de errores, ignorando los cambios realizados por el handler que provocó el error. Esto da a los autores de subgrafos tiempo para corregir sus subgrafos mientras las consultas siguen siendo servidas contra el último bloque, aunque los resultados serán posiblemente inconsistentes debido al fallo que causó el error. Ten en cuenta que algunos errores siguen siendo siempre fatales, para que el error no sea fatal debe saberse que es determinista. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Nota:** The Graph Network todavía no admite errores no fatales, y los desarrolladores no deben desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +La activación de los errores no fatales requiere el establecimiento de la siguiente bandera de características en el manifiesto del subgrafo: ```yaml specVersion: 0.0.4 @@ -864,7 +864,7 @@ features: ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +La consulta también debe optar por consultar datos con posibles inconsistencias a través del argumento `subgraphError`. 
También se recomienda consultar `_meta` para comprobar si el subgrafo ha saltado los errores, como en el ejemplo: ```graphql foos(first: 100, subgraphError: allow) { @@ -876,7 +876,7 @@ _meta { } ``` -If the subgraph encounters an error that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +Si el subgrafo encuentra un error esa consulta devolverá tanto los datos como un error de graphql con el mensaje `"indexing_error"`, como en este ejemplo de respuesta: ```graphql "data": { @@ -896,13 +896,13 @@ If the subgraph encounters an error that query will return both the data and a g ] ``` -### Grafting onto Existing Subgraphs +### Grafting en Subgrafos Existentes -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. +Cuando un subgrafo se despliega por primera vez, comienza a indexar eventos en el bloque génesis de la cadena correspondiente (o en el `startBlock` definido con cada fuente de datos) En algunas circunstancias, es beneficioso reutilizar los datos de un subgrafo existente y comenzar a indexar en un bloque mucho más tarde. Este modo de indexación se denomina _Grafting_. El grafting es, por ejemplo, útil durante el desarrollo para superar rápidamente errores simples en los mapeos, o para hacer funcionar temporalmente un subgrafo existente después de que haya fallado. -> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Nota:** El grafting requiere que el indexador haya indexado el subgrafo base. No se recomienda en The Graph Network en este momento, y los desarrolladores no deberían desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: +Un subgrafo se injerta en un subgrafo base cuando el manifiesto del subgrafo en `subgraph.yaml` contiene un bloque `graft` en el nivel superior: ```yaml description: ... @@ -911,18 +911,18 @@ graft: block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +Cuando se despliega un subgrafo cuyo manifiesto contiene un bloque `graft`, Graph Node copiará los datos del subgrafo `base` hasta e incluyendo el `block` dado y luego continuará indexando el nuevo subgrafo a partir de ese bloque. El subgrafo base debe existir en el target de Graph Node de destino y debe haber indexado hasta al menos el bloque dado. 
Debido a esta restricción, el grafting sólo debería utilizarse durante el desarrollo o durante una emergencia para acelerar la producción de un subgrafo equivalente no grafted. -Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Dado que el grafting copia en lugar de indexar los datos de base, es mucho más rápido llevar el subgrafo al bloque deseado que indexar desde cero, aunque la copia inicial de los datos puede tardar varias horas en el caso de subgrafos muy grandes. Mientras se inicializa el subgrafo grafteado, the Graph Node registrará información sobre los tipos de entidad que ya han sido copiados. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: +El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede desviarse del esquema del subgrafo base de las siguientes maneras: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Agrega o elimina tipos de entidades +- Elimina los atributos de los tipos de entidad +- Agrega atributos anulables a los tipos de entidad +- Convierte los atributos no anulables en atributos anulables +- Añade valores a los enums +- Agrega o elimina interfaces +- Cambia para qué tipos de entidades se implementa una interfaz -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[La gestión de características](#experimental-features):** `grafting` se declara en `features` en el manifiesto del subgrafo. From 61fe484a56cefafa3e910944f84a8e575394f8fc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 11:53:41 -0500 Subject: [PATCH 318/432] New translations developer-faq.mdx (Spanish) --- pages/es/developer/developer-faq.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/developer/developer-faq.mdx b/pages/es/developer/developer-faq.mdx index cf8f19b05420..2090a8b570ba 100644 --- a/pages/es/developer/developer-faq.mdx +++ b/pages/es/developer/developer-faq.mdx @@ -1,5 +1,5 @@ --- -title: Developer FAQs +title: Preguntas Frecuentes de los Desarrolladores --- ### 1. Can I delete my subgraph? 
From 43bf330216ddfbf38ca18245d49b3eba36898841 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 12:49:28 -0500 Subject: [PATCH 319/432] New translations developer-faq.mdx (Spanish) --- pages/es/developer/developer-faq.mdx | 38 ++++++++++++++-------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/pages/es/developer/developer-faq.mdx b/pages/es/developer/developer-faq.mdx index 2090a8b570ba..c42022c59cfa 100644 --- a/pages/es/developer/developer-faq.mdx +++ b/pages/es/developer/developer-faq.mdx @@ -2,47 +2,47 @@ title: Preguntas Frecuentes de los Desarrolladores --- -### 1. Can I delete my subgraph? +### 1. ¿Puedo eliminar mi subgrafo? -It is not possible to delete subgraphs once they are created. +No es posible eliminar los subgrafos una vez creados. -### 2. Can I change my subgraph name? +### 2. ¿Puedo cambiar el nombre de mi subgrafo? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +No. Una vez creado un subgrafo, no se puede cambiar el nombre. Asegúrate de pensar en esto cuidadosamente antes de crear tu subgrafo para que sea fácilmente buscable e identificable por otras dapps. -### 3. Can I change the GitHub account associated with my subgraph? +### 3. ¿Puedo cambiar la cuenta de GitHub asociada a mi subgrafo? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +No. Una vez creado un subgrafo, la cuenta de GitHub asociada no puede ser modificada. Asegúrate de pensarlo bien antes de crear tu subgrafo. -### 4. Am I still able to create a subgraph if my smart contracts don't have events? +### 4. ¿Puedo crear un subgrafo si mis contratos inteligentes no tienen eventos? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. +Es muy recomendable que estructures tus contratos inteligentes para tener eventos asociados a los datos que te interesa consultar. Los handlers de eventos en el subgrafo son activados por los eventos de los contratos, y son, con mucho, la forma más rápida de recuperar datos útiles. -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. +Si los contratos con los que trabajas no contienen eventos, tu subgrafo puede utilizar handlers de llamadas y bloques para activar la indexación. Aunque esto no se recomienda, ya que el rendimiento será significativamente más lento. -### 5. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. ¿Es posible desplegar un subgrafo con el mismo nombre para varias redes? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +Necesitarás nombres distintos para varias redes. 
Aunque no se pueden tener diferentes subgrafos bajo el mismo nombre, hay formas convenientes de tener una sola base de código para múltiples redes. Encontrará más información al respecto en nuestra documentación: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. How are templates different from data sources? +### 6. ¿En qué se diferencian las plantillas de las fuentes de datos? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +Las plantillas permiten crear fuentes de datos sobre la marcha, mientras el subgrafo se indexa. Puede darse el caso de que tu contrato genere nuevos contratos a medida que la gente interactúe con él, y dado que conoces la forma de esos contratos (ABI, eventos, etc) por adelantado, puedes definir cómo quieres indexarlos en una plantilla y, cuando se generen, tu subgrafo creará una fuente de datos dinámica proporcionando la dirección del contrato. -Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). +Consulta la sección "Instalar un modelo de fuente de datos" en: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). -### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 7. ¿Cómo puedo asegurarme de que estoy utilizando la última versión de graph-node para mis despliegues locales? -You can run the following command: +Puede ejecutar el siguiente comando: ```sh docker pull graphprotocol/graph-node:latest ``` -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +**NOTA:** docker / docker-compose siempre utilizará la versión de graph-node que se sacó la primera vez que se ejecutó, por lo que es importante hacer esto para asegurarse de que estás al día con la última versión de graph-node. -### 8. How do I call a contract function or access a public state variable from my subgraph mappings? +### 8. ¿Cómo puedo llamar a una función de contrato o acceder a una variable de estado pública desde mis mapeos de subgrafos? -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). +Echa un vistazo al estado `Access to smart contract` dentro de la sección [AssemblyScript API](/developer/assemblyscript-api). ### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? 
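A minimal sketch of the manual route asked about in question 9, adding a second entry under `dataSources` in `subgraph.yaml` after running `graph init` (the second contract's name, address, event and handler below are hypothetical):

```yaml
dataSources:
  - kind: ethereum/contract
    name: FirstContract # data source generated by `graph init`
    # ...
  - kind: ethereum/contract
    name: SecondContract # added by hand afterwards
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000000' # hypothetical address
      abi: SecondContract
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.5
      language: wasm/assemblyscript
      file: ./src/second-contract.ts
      entities:
        - ExampleEntity
      abis:
        - name: SecondContract
          file: ./abis/SecondContract.json
      eventHandlers:
        - event: SomeEvent(address,uint256)
          handler: handleSomeEvent
```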
From 9810e9487b7c45a2f99d24cd0f101570ade32e30 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 13:56:57 -0500 Subject: [PATCH 320/432] New translations developer-faq.mdx (Spanish) --- pages/es/developer/developer-faq.mdx | 78 ++++++++++++++-------------- 1 file changed, 39 insertions(+), 39 deletions(-) diff --git a/pages/es/developer/developer-faq.mdx b/pages/es/developer/developer-faq.mdx index c42022c59cfa..ed6de912d75e 100644 --- a/pages/es/developer/developer-faq.mdx +++ b/pages/es/developer/developer-faq.mdx @@ -44,27 +44,27 @@ docker pull graphprotocol/graph-node:latest Echa un vistazo al estado `Access to smart contract` dentro de la sección [AssemblyScript API](/developer/assemblyscript-api). -### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. ¿Es posible configurar un subgrafo usando `graph init` desde `graph-cli` con dos contratos? ¿O debo añadir manualmente otra fuente de datos en `subgraph.yaml` después de ejecutar `graph init`? -Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. +Lamentablemente, esto no es posible en la actualidad. `graph init` está pensado como un punto de partida básico, a partir del cual puedes añadir más fuentes de datos manualmente. -### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? +### 10. Quiero contribuir o agregar una cuestión en GitHub, ¿dónde puedo encontrar los repositorios de código abierto? - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 11. ¿Cuál es la forma recomendada de construir ids "autogenerados" para una entidad cuando se manejan eventos? -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +Si sólo se crea una entidad durante el evento y si no hay nada mejor disponible, entonces el hash de la transacción + el índice del registro serían únicos. Puedes ofuscar esto convirtiendo eso en Bytes y luego pasándolo por `crypto.keccak256` pero esto no lo hará más único. -### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 12. Cuando se escuchan varios contratos, ¿es posible seleccionar el orden de los contratos para escuchar los eventos? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. -### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? +### 13. ¿Es posible diferenciar entre redes (mainnet, Kovan, Ropsten, local) desde los handlers de eventos? -Yes. 
You can do this by importing `graph-ts` as per the example below: +Sí. Puedes hacerlo importando `graph-ts` como en el ejemplo siguiente: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,39 +73,39 @@ dataSource.network() dataSource.address() ``` -### 14. Do you support block and call handlers on Rinkeby? +### 14. ¿Apoyan el bloqueo y los handlers de llamadas en Rinkeby? -On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. +En Rinkeby apoyamos los handlers de bloque, pero sin `filter: call`. Los handlers de llamadas no son compatibles por el momento. -### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? +### 15. ¿Puedo importar ethers.js u otras bibliotecas JS en mis mapeos de subgrafos? -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Actualmente no, ya que los mapeos se escriben en AssemblyScript. Una posible solución alternativa a esto es almacenar los datos en bruto en entidades y realizar la lógica que requiere las bibliotecas JS en el cliente. -### 16. Is it possible to specifying what block to start indexing on? +### 16. ¿Es posible especificar en qué bloque se inicia la indexación? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks +Sí. `dataSources.source.startBlock` en el `subgraph.yaml` especifica el número del bloque a partir del cual la fuente de datos comienza a indexar. En la mayoría de los casos, sugerimos utilizar el bloque en el que se creó el contrato: Bloques de inicio -### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. +### 17. ¿Hay algunos consejos para aumentar el rendimiento de la indexación? Mi subgrafo está tardando mucho en sincronizarse. -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) +Sí, deberías echar un vistazo a la función opcional de inicio de bloque para comenzar la indexación desde el bloque en el que se desplegó el contrato: [Start blocks](/developer/create-subgraph-hosted#start-blocks) -### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? +### 18. ¿Hay alguna forma de consultar directamente el subgrafo para determinar cuál es el último número de bloque que ha indexado? -Yes! Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +¡Sí! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. What networks are supported by The Graph? +### 19. ¿Qué redes son compatibles con The Graph? -The graph-node supports any EVM-compatible JSON RPC API chain. +The Graph Node admite cualquier cadena de API JSON RPC compatible con EVM. 
The Graph Network admite subgrafos que indexan la red principal de Ethereum: - `mainnet` -In the Hosted Service, the following networks are supported: +En el Servicio Alojado, se admiten las siguientes redes: - Ethereum mainnet - Kovan @@ -133,40 +133,40 @@ In the Hosted Service, the following networks are supported: - Optimism - Optimism Testnet (on Kovan) -There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). +Se está trabajando en la integración de otras blockchains, puedes leer más en nuestro repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). -### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 20. ¿Es posible duplicar un subgrupo en otra cuenta o endpoint sin volver a desplegarlo? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +Tienes que volver a desplegar el subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. -### 21. Is this possible to use Apollo Federation on top of graph-node? +### 21. ¿Es posible utilizar Apollo Federation sobre graph-node? -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. +Federation aún no es compatible, aunque queremos apoyarla en el futuro. Por el momento, algo que se puede hacer es utilizar el stitching de esquemas, ya sea en el cliente o a través de un servicio proxy. -### 22. Is there a limit to how many objects The Graph can return per query? +### 22. ¿Existe un límite en el número de objetos que The Graph puede devolver por consulta? -By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: +Por defecto, las respuestas a las consultas están limitadas a 100 elementos por colección. Si quieres recibir más, puedes llegar hasta 1000 artículos por colección y más allá puedes paginar con: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. Si mi dapp frontend utiliza The Graph para la consulta, ¿tengo que escribir mi clave de consulta en el frontend directamente? Si pagamos tasas de consulta a los usuarios, ¿los usuarios malintencionados harán que nuestras tasas de consulta sean muy altas? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Actualmente, el enfoque recomendado para una dapp es añadir la clave al frontend y exponerla a los usuarios finales. Dicho esto, puedes limitar esa clave a un nombre de host, como _yourdapp.io_ y subgrafo. El gateway está siendo gestionado actualmente por Edge & Node. 
Parte de la responsabilidad de un gateway es vigilar los comportamientos abusivos y bloquear el tráfico de los clientes maliciosos. -### 24. Where do I go to find my current subgraph on the Hosted Service? +### 24. ¿Dónde puedo encontrar mi subgrafo actual en el Servicio Alojado? -Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) +Dirígete al Servicio Alojado para encontrar los subgrafos que tú u otros desplegaron en el Servicio Alojado. Puedes encontrarlo [aquí.](https://thegraph.com/hosted-service) -### 25. Will the Hosted Service start charging query fees? +### 25. ¿Comenzará el Servicio Alojado a cobrar tasas de consulta? -The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. +The Graph nunca cobrará por el Servicio Alojado. The Graph es un protocolo descentralizado, y cobrar por un servicio centralizado no está alineado con los valores de The Graph. El Servicio Alojado siempre fue un paso temporal para ayudar a llegar a la red descentralizada. Los desarrolladores dispondrán de tiempo suficiente para migrar a la red descentralizada a medida que se sientan cómodos. -### 26. When will the Hosted Service be shut down? +### 26. ¿Cuándo se cerrará el Servicio Alojado? -If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service. +Si y cuando se planee hacer esto, se notificará a la comunidad con suficiente antelación y se tendrán en cuenta los subgrafos construidos en el Servicio Alojado. -### 27. How do I upgrade a subgraph on mainnet? +### 27. ¿Cómo puedo actualizar un subgrafo en mainnet? -If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +Si eres un desarrollador de subgrafos, puedes actualizar una nueva versión de tus subgrafos a Studio utilizando la CLI. En ese momento será privado, pero si estás contento con él, puedes publicarlo en the Graph Explorer descentralizado. Esto creará una nueva versión de tu subgrafo que los Curadoress pueden empezar a señalar. From 963afb1e650e2348fd3a91617f8e08bc1058a042 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 13:56:58 -0500 Subject: [PATCH 321/432] New translations graphql-api.mdx (Spanish) --- pages/es/developer/graphql-api.mdx | 38 +++++++++++++++--------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/pages/es/developer/graphql-api.mdx b/pages/es/developer/graphql-api.mdx index de8892074d99..5b15f50b20f5 100644 --- a/pages/es/developer/graphql-api.mdx +++ b/pages/es/developer/graphql-api.mdx @@ -1,16 +1,16 @@ --- -title: GraphQL API +title: API GraphQL --- -This guide explains the GraphQL Query API that is used for the Graph Protocol. +Esta guía explica la API de consulta GraphQL que se utiliza para the Graph Protocol. 
-## Queries +## Consultas -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +En tu esquema de subgrafos defines tipos llamados `Entities`. Por cada tipo de `Entity`, se generará un campo `entity` y `entities` en el nivel superior del tipo `Query`. Ten en cuenta que no es necesario incluir `query` en la parte superior de la consulta `graphql` cuando se utiliza The Graph. -#### Examples +#### Ejemplos -Query for a single `Token` entity defined in your schema: +Consulta de una única entidad `Token` definida en tu esquema: ```graphql { @@ -21,9 +21,9 @@ Query for a single `Token` entity defined in your schema: } ``` -**Note:** When querying for a single entity, the `id` field is required and it must be a string. +**Nota:** Cuando se consulta una sola entidad, el campo `id` es obligatorio y debe ser un string. -Query all `Token` entities: +Consulta todas las entidades `Token`: ```graphql { @@ -34,9 +34,9 @@ Query all `Token` entities: } ``` -### Sorting +### Clasificación -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +Al consultar una colección, el parámetro `orderBy` puede utilizarse para ordenar por un atributo específico. Además, el `orderDirection` se puede utilizar para especificar la dirección de ordenación, `asc` para ascendente o `desc` para descendente. #### Ejemplo @@ -49,17 +49,17 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe } ``` -### Pagination +### Paginación -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. +Al consultar una colección, el parámetro `first` puede utilizarse para paginar desde el principio de la colección. Cabe destacar que el orden por defecto es por ID en orden alfanumérico ascendente, no por tiempo de creación. -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +Además, el parámetro `skip` puede utilizarse para saltar entidades y paginar. por ejemplo, `first:100` muestra las primeras 100 entidades y `first:100, skip:100` muestra las siguientes 100 entidades. -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +Las consultas deben evitar el uso de valores de `skip` muy grandes, ya que suelen tener un rendimiento deficiente. Para recuperar un gran número de elementos, es mucho mejor para paginar recorrer las entidades basándose en un atributo, como se muestra en el último ejemplo. #### Ejemplo -Query the first 10 tokens: +Consulta los primeros 10 tokens: ```graphql { @@ -70,11 +70,11 @@ Query the first 10 tokens: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. 
+Para consultar grupos de entidades en medio de una colección, el parámetro `skip` puede utilizarse junto con el parámetro `first` para omitir un número determinado de entidades empezando por el principio de la colección. #### Ejemplo -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la colección: ```graphql { @@ -87,7 +87,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Ejemplo -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. For example, a client would retrieve a large number of tokens using this query: ```graphql { @@ -211,7 +211,7 @@ Fulltext search operators: | `<->` | `Follow by` | Specify the distance between two words. | | `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | -#### Examples +#### Ejemplos Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. From 2b1a5df3b30603b7723af8e8d59a37d49cbde5b7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 15:18:07 -0500 Subject: [PATCH 322/432] New translations graphql-api.mdx (Spanish) --- pages/es/developer/graphql-api.mdx | 68 +++++++++++++++--------------- 1 file changed, 34 insertions(+), 34 deletions(-) diff --git a/pages/es/developer/graphql-api.mdx b/pages/es/developer/graphql-api.mdx index 5b15f50b20f5..4513e9f5c724 100644 --- a/pages/es/developer/graphql-api.mdx +++ b/pages/es/developer/graphql-api.mdx @@ -87,7 +87,7 @@ Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la c #### Ejemplo -Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. For example, a client would retrieve a large number of tokens using this query: +Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. Por ejemplo, un cliente podría recuperar un gran número de tokens utilizando esta consulta: ```graphql { @@ -100,15 +100,15 @@ Si un cliente necesita recuperar un gran número de entidades, es mucho más efi } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +La primera vez, enviaría la consulta con `lastID = ""`, y para las siguientes peticiones pondría `lastID` al atributo `id` de la última entidad de la petición anterior. Este enfoque tendrá un rendimiento significativamente mejor que el uso de valores crecientes de `skip`. -### Filtering +### Filtro -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +Puedes utilizar el parámetro `where` en tus consultas para filtrar por diferentes propiedades. Puedes filtrar por múltiples valores dentro del parámetro `where`. 
#### Ejemplo -Query challenges with `failed` outcome: +Desafíos de consulta con resultado `failed`: ```graphql { @@ -122,7 +122,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +Puede utilizar sufijos como `_gt`, `_lte` para la comparación de valores: #### Ejemplo @@ -136,7 +136,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: } ``` -Full list of parameter suffixes: +Lista completa de sufijos de parámetros: ```graphql _not @@ -154,15 +154,15 @@ _not_starts_with _not_ends_with ``` -Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. +Ten en cuenta que algunos sufijos sólo son compatibles con determinados tipos. Por ejemplo, `Boolean` solo admite `_not`, `_in`, y `_not_in`. -### Time-travel queries +### Consultas sobre Time-travel -You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +Puedes consultar el estado de tus entidades no sólo para el último bloque, que es el predeterminado, sino también para un bloque arbitrario en el pasado. El bloque en el que debe producirse una consulta puede especificarse por su número de bloque o su hash de bloque incluyendo un argumento `block` en los campos de nivel superior de las consultas. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +El resultado de una consulta de este tipo no cambiará con el tiempo, es decir, la consulta en un determinado bloque pasado devolverá el mismo resultado sin importar cuándo se ejecute, con la excepción de que si se consulta en un bloque muy cercano al encabezado de la cadena de Ethereum, el resultado podría cambiar si ese bloque resulta no estar en la cadena principal y la cadena se reorganiza. Una vez que un bloque puede considerarse definitivo, el resultado de la consulta no cambiará. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +Ten en cuenta que la implementación está sujeta a ciertas limitaciones que podrían violar estas garantías. La implementación no siempre puede decir que un hash de bloque dado no está en la cadena principal en absoluto, o que el resultado de una consulta por hash de bloque para un bloque que no puede considerarse final todavía podría estar influenciado por una reorganización de bloque que se ejecuta simultáneamente con la consulta. 
No afectan a los resultados de las consultas por el hash del bloque cuando éste es definitivo y se sabe que está en la cadena principal. [ Esta cuestión](https://github.com/graphprotocol/graph-node/issues/1405) explica con detalle cuáles son estas limitaciones. #### Ejemplo @@ -178,7 +178,7 @@ Note that the current implementation is still subject to certain limitations tha } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `Application`, tal y como existían directamente después de procesar el bloque número 8.000.000. #### Ejemplo @@ -194,26 +194,26 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `Application`, tal y como existían directamente después de procesar el bloque con el hash dado. -### Fulltext Search Queries +### Consultas de Búsqueda de Texto Completo -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Los campos de consulta de búsqueda de texto completo proporcionan una API de búsqueda de texto expresiva que puede añadirse al esquema de subgrafos y personalizarse. Consulta [Definiendo los campos de búsqueda de texto completo](/developer/create-subgraph-hosted#defining-fulltext-search-fields) para añadir la búsqueda de texto completo a tu subgrafo. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Las consultas de búsqueda de texto completo tienen un campo obligatorio, `text`, para suministrar los términos de búsqueda. Hay varios operadores especiales de texto completo que se pueden utilizar en este campo de búsqueda de `text`. -Fulltext search operators: +Operadores de búsqueda de texto completo: -| Symbol | Operator | Descripción | -| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) 
| +| Símbolo | Operador | Descripción | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | +| | | `Or` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados | +| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | +| `:*` | `Prefix` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) | #### Ejemplos -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +Utilizando el operador `or`, esta consulta filtrará las entidades del blog que tengan variaciones de "anarchism" o de "crumpet" en sus campos de texto completo. ```graphql { @@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +El operador `follow by` especifica unas palabras a una distancia determinada en los documentos de texto completo. La siguiente consulta devolverá todos los blogs con variaciones de "decentralize" seguidas de "philosophy" ```graphql { @@ -239,7 +239,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +Combina los operadores de texto completo para crear filtros más complejos. Con un operador de búsqueda de pretexto combinado con un follow by esta consulta de ejemplo coincidirá con todas las entidades del blog con palabras que empiecen por "lou" seguidas de "music". ```graphql { @@ -252,16 +252,16 @@ Combine fulltext operators to make more complex filters. With a pretext search o } ``` -## Schema +## Esquema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +El esquema de tu fuente de datos, es decir, los tipos de entidad, los valores y las relaciones que están disponibles para la consulta, se definen a través del [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +Los esquemas de GraphQL suelen definir tipos raíz para `queries`, `subscriptions` y `mutations`. The Graph solo admite `queries`. El tipo de `Query` raíz de tu subgrafo se genera automáticamente a partir del esquema GraphQL que se incluye en el manifiesto de tu subgrafo. 
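For reference, a minimal entity definition in that IDL could look like the following sketch (the `Token` type and its fields are illustrative):

```graphql
type Token @entity {
  id: ID!
  owner: Bytes!
  decimals: Int!
}
```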
-> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> **Nota:** Nuestra API no expone mutaciones porque se espera que los desarrolladores emitan transacciones directamente contra la blockchain subyacente desde sus aplicaciones. -### Entities +### Entidades -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +Todos los tipos GraphQL con directivas `@entity` en tu esquema serán tratados como entidades y deben tener un campo `ID`. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **Nota:** Actualmente, todos los tipos de tu esquema deben tener una directiva `@entity`. En el futuro, trataremos los tipos sin una directiva `@entity` como objetos de valor, pero esto todavía no está soportado. From a4f590030d89b58b0f0afaf9c21e260952cb3378 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 15:18:08 -0500 Subject: [PATCH 323/432] New translations matchstick.mdx (Spanish) --- pages/es/developer/matchstick.mdx | 52 +++++++++++++++---------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/pages/es/developer/matchstick.mdx b/pages/es/developer/matchstick.mdx index 2f4e5e972ebe..9011a75b17bc 100644 --- a/pages/es/developer/matchstick.mdx +++ b/pages/es/developer/matchstick.mdx @@ -1,16 +1,16 @@ --- -title: Unit Testing Framework +title: Marco de Unit Testing --- -Matchstick is a unit testing framework, developed by [LimeChain](https://limechain.tech/), that enables subgraph developers to test their mapping logic in a sandboxed environment and deploy their subgraphs with confidence! +Matchstick es un marco de unit testing, desarrollado por [LimeChain](https://limechain.tech/), que permite a los desarrolladores de subgrafos probar su lógica de mapeo en un entorno sandbox y desplegar sus subgrafos con confianza! -Follow the [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) to install. Now, you can move on to writing your first unit test. +Sigue la [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) para instalar. Ahora, puede pasar a escribir tu primera unit test. -## Write a Unit Test +## Escribe una Unit Test -Let's see how a simple unit test would look like, using the Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). +Veamos cómo sería una unit test sencilla, utilizando el Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). -Assuming we have the following handler function (along with two helper functions to make our life easier): +Suponiendo que tenemos la siguiente función handler (junto con dos funciones de ayuda para facilitarnos la vida): ```javascript export function handleNewGravatar(event: NewGravatar): void { @@ -44,13 +44,13 @@ export function createNewGravatarEvent( mockEvent.parameters ) newGravatarEvent.parameters = new Array() - let idParam = new ethereum. EventParam('id', ethereum. Value.fromI32(id)) - let addressParam = new ethereum. EventParam( + let idParam = new ethereum.EventParam('id', ethereum.Value.fromI32(id)) + let addressParam = new ethereum.EventParam( 'ownderAddress', - ethereum. 
Value.fromAddress(Address.fromString(ownerAddress)) + ethereum.Value.fromAddress(Address.fromString(ownerAddress)) ) - let displayNameParam = new ethereum. EventParam('displayName', ethereum. Value.fromString(displayName)) - let imageUrlParam = new ethereum. EventParam('imageUrl', ethereum. Value.fromString(imageUrl)) + let displayNameParam = new ethereum.EventParam('displayName', ethereum.Value.fromString(displayName)) + let imageUrlParam = new ethereum.EventParam('imageUrl', ethereum.Value.fromString(imageUrl)) newGravatarEvent.parameters.push(idParam) newGravatarEvent.parameters.push(addressParam) @@ -61,7 +61,7 @@ export function createNewGravatarEvent( } ``` -We first have to create a test file in our project. We have chosen the name `gravity.test.ts`. In the newly created file we need to define a function named `runTests()`. It is important that the function has that exact name. This is an example of how our tests might look like: +Primero tenemos que crear un archivo de prueba en nuestro proyecto. Hemos elegido el nombre `gravity.test.ts`. En el archivo recién creado tenemos que definir una función llamada `runTests()`. Es importante que la función tenga ese nombre exacto. Este es un ejemplo de cómo podrían ser nuestras pruebas: ```typescript import { clearStore, test, assert } from 'matchstick-as/assembly/index' @@ -95,27 +95,27 @@ export function runTests(): void { } ``` -That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks. The rest of it is pretty straightforward - here's what happens: +¡Es mucho para desempacar! En primer lugar, una cosa importante a notar es que estamos importando cosas de `matchstick-as`, nuestra biblioteca de ayuda de AssemblyScript (distribuida como un módulo npm). Puedes encontrar el repositorio [aquí](https://github.com/LimeChain/matchstick-as). `matchstick-as` nos proporciona útiles métodos de prueba y también define la función `test()` que utilizaremos para construir nuestros bloques de prueba. El resto es bastante sencillo: esto es lo que ocurre: -- We're setting up our initial state and adding one custom Gravatar entity; -- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; -- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; -- We assert the state of the store. How does that work? How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called; -- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. 
+- Estamos configurando nuestro estado inicial y añadiendo una entidad Gravatar personalizada; +- Definimos dos objetos de evento `NewGravatar` junto con sus datos, utilizando la función `createNewGravatarEvent()`; +- Estamos llamando a los métodos handlers de esos eventos - `handleNewGravatars()` y pasando la lista de nuestros eventos personalizados; +- Hacemos valer el estado del almacén. ¿Cómo funciona eso? - Pasamos una combinación única de tipo de Entidad e id. A continuación, comprobamos un campo específico de esa Entidad y afirmamos que tiene el valor que esperamos que tenga. Hacemos esto tanto para la Entidad Gravatar inicial que añadimos al almacén, como para las dos entidades Gravatar que se añaden cuando se llama a la función del handler; +- Y por último - estamos limpiando el almacén usando `clearStore()` para que nuestra próxima prueba pueda comenzar con un objeto almacén fresco y vacío. Podemos definir tantos bloques de prueba como queramos. -There we go - we've created our first test! 👏 👏 +Ya está: ¡hemos creado nuestra primera prueba! 👏 -❗ **IMPORTANT:** _In order for the tests to work, we need to export the `runTests()` function in our mappings file. It won't be used there, but the export statement has to be there so that it can get picked up by Rust later when running the tests._ +❗ **IMPORTANTE:** _ Para que las pruebas funcionen, necesitamos exportar la función `runTests()` en nuestro archivo de mapeo. No se utilizará allí, pero la declaración de exportación tiene que estar allí para que pueda ser recogida por Rust más tarde al ejecutar las pruebas._ -You can export the tests wrapper function in your mappings file like this: +Puedes exportar la función wrapper de las pruebas en tu archivo de mapeo de la siguiente manera: ``` export { runTests } from "../tests/gravity.test.ts"; ``` -❗ **IMPORTANT:** _Currently there's an issue with using Matchstick when deploying your subgraph. Please only use Matchstick for local testing, and remove/comment out this line (`export { runTests } from "../tests/gravity.test.ts"`) once you're done. We expect to resolve this issue shortly, sorry for the inconvenience!_ +❗ **IMPORTANTE:** _Actualmente hay un problema con el uso de Matchstick cuando se despliega tu subgrafo. Por favor, sólo usa Matchstick para pruebas locales, y elimina/comenta esta línea (`export { runTests } de "../tests/gravity.test.ts"`) una vez que hayas terminado. Esperamos resolver este problema en breve, ¡disculpa las molestias!_ -_If you don't remove that line, you will get the following error message when attempting to deploy your subgraph:_ +_Si no eliminas esa línea, obtendrás el siguiente mensaje de error al intentar desplegar tu subgrafo:_ ``` /... 
@@ -123,15 +123,15 @@ Mapping terminated before handling trigger: oneshot canceled .../ ``` -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Ahora, para ejecutar nuestras pruebas, sólo tienes que ejecutar lo siguiente en la carpeta raíz de tu subgrafo: `graph test Gravity` -And if all goes well you should be greeted with the following: +Y si todo va bien deberías ser recibido con lo siguiente: ![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) -## Common test scenarios +## Escenarios de prueba comunes ### Hydrating the store with a certain state From 74f785c6d02ce01b4519450a91ee958430c97a9e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 16:14:22 -0500 Subject: [PATCH 324/432] New translations matchstick.mdx (Spanish) --- pages/es/developer/matchstick.mdx | 50 +++++++++++++++---------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/pages/es/developer/matchstick.mdx b/pages/es/developer/matchstick.mdx index 9011a75b17bc..2cd0e327579d 100644 --- a/pages/es/developer/matchstick.mdx +++ b/pages/es/developer/matchstick.mdx @@ -129,22 +129,22 @@ Ahora, para ejecutar nuestras pruebas, sólo tienes que ejecutar lo siguiente en Y si todo va bien deberías ser recibido con lo siguiente: -![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) +![Matchstick diciendo "¡Todas las pruebas superadas!"](/img/matchstick-tests-passed.png) ## Escenarios de prueba comunes -### Hydrating the store with a certain state +### Hidratar la tienda con un cierto estado -Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity: +Los usuarios pueden hidratar la tienda con un conjunto conocido de entidades. Aquí hay un ejemplo para inicializar la tienda con una entidad Gravatar: ```typescript let gravatar = new Gravatar('entryId') gravatar.save() ``` -### Calling a mapping function with an event +### Llamada a una función de mapeo con un evento -A user can create a custom event and pass it to a mapping function that is bound to the store: +Un usuario puede crear un evento personalizado y pasarlo a una función de mapeo que está vinculada a la tienda: ```typescript import { store } from 'matchstick-as/assembly/store' @@ -156,9 +156,9 @@ let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01 handleNewGravatar(newGravatarEvent) ``` -### Calling all of the mappings with event fixtures +### Llamar a todos los mapeos con fixtures de eventos -Users can call the mappings with test fixtures. +Los usuarios pueden llamar a los mapeos con fixtures de prueba. ```typescript import { NewGravatar } from '../../generated/Gravity/Gravity' @@ -180,9 +180,9 @@ export function handleNewGravatars(events: NewGravatar[]): void { } ``` -### Mocking contract calls +### Simular llamadas de contratos -Users can mock contract calls: +Los usuarios pueden simular las llamadas de los contratos: ```typescript import { addMetadata, assert, createMockedFunction, clearStore, test } from 'matchstick-as/assembly/index' @@ -199,12 +199,12 @@ createMockedFunction(contractAddress, 'gravatarToOwner', 'gravatarToOwner(uint25 let gravity = Gravity.bind(contractAddress) let result = gravity.gravatarToOwner(bigIntParam) -assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum. 
Value.fromAddress(result)) +assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result)) ``` -As demonstrated, in order to mock a contract call and hardcore a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value. +Como se ha demostrado, para simular (mock) una llamada a un contrato y endurecer un valor de retorno, el usuario debe proporcionar una dirección de contrato, el nombre de la función, la firma de la función, una array de argumentos y, por supuesto, el valor de retorno. -Users can also mock function reverts: +Los usuarios también pueden simular las reversiones de funciones: ```typescript let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7') @@ -213,9 +213,9 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri .reverts() ``` -### Asserting the state of the store +### Afirmar el estado del almacén -Users are able to assert the final (or midway) state of the store through asserting entities. In order to do this, the user has to supply an Entity type, the specific ID of an Entity, a name of a field on that Entity, and the expected value of the field. Here's a quick example: +Los usuarios pueden hacer una aserción al estado final (o intermedio) del almacén a través de entidades de aserción. Para ello, el usuario tiene que suministrar un tipo de Entidad, el ID específico de una Entidad, el nombre de un campo en esa Entidad y el valor esperado del campo. Aquí hay un ejemplo rápido: ```typescript import { assert } from 'matchstick-as/assembly/index' @@ -227,11 +227,11 @@ gravatar.save() assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') ``` -Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully. +Al ejecutar la función assert.fieldEquals() se comprobará la igualdad del campo dado con el valor esperado dado. La prueba fallará y se emitirá un mensaje de error si los valores son **NO** iguales. En caso contrario, la prueba pasará con éxito. -### Interacting with Event metadata +### Interacción con los metadatos de los Eventos -Users can use default transaction metadata, which could be returned as an ethereum. Event by using the `newMockEvent()` function. The following example shows how you can read/write to those fields on the Event object: +Los usuarios pueden utilizar los metadatos de la transacción por defecto, que podrían ser devueltos como un ethereum.Event utilizando la función `newMockEvent()`. El siguiente ejemplo muestra cómo se puede leer/escribir en esos campos del objeto Evento: ```typescript // Read @@ -242,26 +242,26 @@ let UPDATED_ADDRESS = '0xB16081F360e3847006dB660bae1c6d1b2e17eC2A' newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) ``` -### Asserting variable equality +### Afirmar la igualdad de las variables ```typescript -assert.equals(ethereum.Value.fromString("hello"); ethereum. Value.fromString("hello")); +assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); ``` -### Asserting that an Entity is **not** in the store +### Afirmar que una Entidad es **no** en el almacén -Users can assert that an entity does not exist in the store. The function takes an entity type and an id. 
If the entity is in fact in the store, the test will fail with a relevant error message. Here's a quick example of how to use this functionality: +Los usuarios pueden afirmar que una entidad no existe en el almacén. La función toma un tipo de entidad y un id. Si la entidad está de hecho en el almacén, la prueba fallará con un mensaje de error relevante. Aquí hay un ejemplo rápido de cómo utilizar esta funcionalidad: ```typescript assert.notInStore('Gravatar', '23') ``` -### Test run time duration in the log output +### Duración del tiempo de ejecución de la prueba en la salida del registro -The log output includes the test run duration. Here's an example: +La salida del registro incluye la duración de la prueba. Aquí hay un ejemplo: `Jul 09 14:54:42.420 INFO Program execution time: 10.06022ms` -## Feedback +## Comentarios -If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing. +Si tienes alguna pregunta, comentario, petición de características o simplemente quieres ponerte en contacto, el mejor lugar sería The Graph Discord, donde tenemos un canal dedicado a Matchstick, llamado 🔥| unit-testing. From fd0a14109ce63abe103337f52ce0b1f5a42ce31f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 16:14:23 -0500 Subject: [PATCH 325/432] New translations query-hosted-service.mdx (Spanish) --- pages/es/hosted-service/query-hosted-service.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/es/hosted-service/query-hosted-service.mdx b/pages/es/hosted-service/query-hosted-service.mdx index d1176977e93e..cdb6bf9f8135 100644 --- a/pages/es/hosted-service/query-hosted-service.mdx +++ b/pages/es/hosted-service/query-hosted-service.mdx @@ -1,8 +1,8 @@ --- -title: Query the Hosted Service +title: Consultas en el Sistema Alojado --- -With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +Con el subgrafo desplegado, visita el [Servicio alojado](https://thegraph.com/hosted-service/) para abrir una interfaz [GraphiQL](https://github.com/graphql/graphiql) donde puedes explorar la API GraphQL desplegada para el subgrafo emitiendo consultas y viendo el esquema. A continuación se proporciona un ejemplo, pero por favor, consulta la [Query API](/developer/graphql-api) para obtener una referencia completa sobre cómo consultar las entidades del subgrafo. @@ -19,10 +19,10 @@ Estas listas de consultas muestran todos los contadores que nuestro mapeo ha cre } ``` -## Using The Hosted Service +## Utilización del Servicio Alojado -The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service. +The Graph Explorer y su playground GraphQL es una forma útil de explorar y consultar los subgrafos desplegados en el Servicio Alojado. 
-Some of the main features are detailed below: +A continuación se detallan algunas de las principales características: -![Explorer Playground](/img/explorer-playground.png) +![Explora el Playground](/img/explorer-playground.png) From 59a7252d865a07d86e2a07855a5519c5bfc68386 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 20:37:36 -0500 Subject: [PATCH 326/432] New translations assemblyscript-migration-guide.mdx (Japanese) --- .../assemblyscript-migration-guide.mdx | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index 922351f8cb2b..3bd8d349a9e3 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -1,18 +1,18 @@ --- -title: AssemblyScript Migration Guide +title: AssemblyScript マイグレーションガイド --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +これまでサブグラフは、[AssemblyScriptの最初のバージョン](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6)を使用していました。 ついに[最新のバージョン](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10)(v0.19.10) のサポートを追加しました! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +これにより、サブグラフの開発者は、AS言語と標準ライブラリの新しい機能を使用できるようになります。 -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +このガイドは、バージョン`0.22.0`以下の`graph-cli`/`graph-ts` をお使いの方に適用されます。 もしあなたがすでにそれ以上のバージョンにいるなら、あなたはすでに AssemblyScript のバージョン`0.19.10` を使っています。 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> 注:`0.24.0`以降、`graph-node`はサブグラフマニフェストで指定された`apiVersion`に応じて、両方のバージョンをサポートしています。 -## Features +## 特徴 -### New functionality +### 新機能 - `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) - New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) @@ -30,21 +30,21 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` - Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) - Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizations +### 最適化 - `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) - Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Other +### その他 - The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## How to upgrade? +## アップグレードの方法 -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. `subgraph.yaml`のマッピングの`apiVersion`を`0.0.6`に変更してください。 ```yaml ... @@ -55,7 +55,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. 使用している`graph-cli`を`最新版`に更新するには、次のように実行します。 ```bash # if you have it globally installed @@ -472,7 +472,7 @@ type MyEntity @entity { This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). 
-### Other +### その他 - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) From a2970ad052e051fb20e5c3b027fcb2ba67a4040f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Wed, 26 Jan 2022 21:41:19 -0500 Subject: [PATCH 327/432] New translations assemblyscript-migration-guide.mdx (Japanese) --- .../assemblyscript-migration-guide.mdx | 44 +++++++++---------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index 3bd8d349a9e3..e37bad9539d6 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -65,20 +65,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. `graph-ts`についても同様ですが、グローバルにインストールするのではなく、メインの依存関係に保存します。 ```bash npm install --save @graphprotocol/graph-ts@latest ``` -4. Follow the rest of the guide to fix the language breaking changes. -5. Run `codegen` and `deploy` again. +4. ガイドの残りの部分に従って、言語の変更を修正します。 +5. `codegen`を実行し、再度`deploy`します。 -## Breaking changes +## 変更点 ### Nullability -On the older version of AssemblyScript, you could create code like this: +古いバージョンのAssemblyScriptでは、以下のようなコードを作ることができました: ```typescript function load(): Value | null { ... } @@ -87,7 +87,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -However on the newer version, because the value is nullable, it requires you to check, like this: +しかし、新しいバージョンでは、値がnullableであるため、次のようにチェックする必要があります: ```typescript let maybeValue = load() @@ -97,7 +97,7 @@ if (maybeValue) { } ``` -Or force it like this: +あるいは、次のように強制します: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -105,11 +105,11 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +どちらを選択すべきか迷った場合は、常に安全なバージョンを使用することをお勧めします。 値が存在しない場合は、サブグラフハンドラの中でreturnを伴う初期のif文を実行するとよいでしょう。 -### Variable Shadowing +### 変数シャドウイング -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +以前は、[変数のシャドウイング](https://en.wikipedia.org/wiki/Variable_shadowing)を行うことができ、次のようなコードが動作していました。 ```typescript let a = 10 @@ -117,7 +117,7 @@ let b = 20 let a = a + b ``` -However now this isn't possible anymore, and the compiler returns this error: +しかし、現在はこれができなくなり、コンパイラは次のようなエラーを返します。 ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -126,9 +126,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -You'll need to rename your duplicate variables if you had variable shadowing. 
-### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +変数シャドウイングを行っていた場合は、重複する変数の名前を変更する必要があります。 +### Null比較 +サブグラフのアップグレードを行うと、時々以下のようなエラーが発生することがあります。 ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -136,7 +136,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +解決するには、 `if` 文を以下のように変更するだけです。 ```typescript if (!decimals) { @@ -146,23 +146,23 @@ To solve you can simply change the `if` statement to something like this: if (decimals === null) { ``` -The same applies if you're doing != instead of ==. +これは、==ではなく!=の場合も同様です。 -### Casting +### キャスト -The common way to do casting before was to just use the `as` keyword, like this: +以前の一般的なキャストの方法は、次のように`as`キーワードを使うだけでした。 ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -However this only works in two scenarios: +しかし、これは2つのシナリオでしか機能しません。 -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); -- Upcasting on class inheritance (subclass → superclass) +- プリミティブなキャスト(between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- クラス継承のアップキャスティング(サブクラス→スーパークラス) -Examples: +例 ```typescript // primitive casting From 32bd595bc05662ee11cbf84e96916689c8655aef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 00:17:17 -0500 Subject: [PATCH 328/432] New translations assemblyscript-migration-guide.mdx (Japanese) --- .../assemblyscript-migration-guide.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index e37bad9539d6..7e178b00504b 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -178,10 +178,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +キャストしたくても、`as`/`var`を使うと**安全ではない**というシナリオが2つあります。 -- Downcasting on class inheritance (superclass → subclass) -- Between two types that share a superclass +- クラス継承のダウンキャスト(スーパークラス → サブクラス) +- スーパークラスを共有する2つの型の間 ```typescript // downcasting on class inheritance @@ -198,7 +198,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +このような場合には、`changetype`関数を使用します。 ```typescript // downcasting on class inheritance @@ -217,7 +217,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. 
+単にnull性を除去したいだけなら、`as` 演算子(or `variable`)を使い続けることができますが、値がnullではないことを確認しておかないと壊れてしまいます。 ```typescript // remove nullability @@ -230,18 +230,18 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +Nullabilityについては、[nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks)を利用することをお勧めします。 -Also we've added a few more static methods in some types to ease casting, they are: +また、キャストを容易にするために、いくつかの型にスタティックメソッドを追加しました。 - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Nullability check with property access +### プロパティアクセスによるNullabilityチェック -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +[nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks)を使用するには、次のように`if`文や三項演算子(`?` and `:`) を使用します。 ```typescript let something: string | null = 'data' @@ -259,7 +259,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +しかし、これは、以下のように、プロパティのアクセスではなく、変数に対して`if`/ternaryを行っている場合にのみ機能します。 ```typescript class Container { @@ -272,7 +272,7 @@ container.data = 'data' let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` -Which outputs this error: +すると、このようなエラーが出力されます。 ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. From cecbdf8caef54e1462ff43dbc02d517db6075a5f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:23 -0500 Subject: [PATCH 329/432] New translations assemblyscript-migration-guide.mdx (Japanese) --- .../assemblyscript-migration-guide.mdx | 40 +++++++++---------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index 7e178b00504b..689abb304dfc 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -217,7 +217,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -単にnull性を除去したいだけなら、`as` 演算子(or `variable`)を使い続けることができますが、値がnullではないことを確認しておかないと壊れてしまいます。 +単にnull性を除去したいだけなら、`as` オペレーター(or `variable`)を使い続けることができますが、値がnullではないことを確認しておかないと壊れてしまいます。 ```typescript // remove nullability @@ -280,7 +280,7 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` -To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: +この問題を解決するには、そのプロパティアクセスのための変数を作成して、コンパイラがnullability checkのマジックを行うようにします。 ```typescript class Container { @@ -295,9 +295,9 @@ let data = container.data let somethingOrElse: string = data ? 
data : 'else' // compiles just fine :) ``` -### Operator overloading with property access +### プロパティアクセスによるオペレーターオーバーロード -If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. +アセンブリスクリプトのコンパイラは、値の片方がnullableであることを警告するコンパイル時のエラーを出さずに、ただ黙ってコンパイルするので、実行時にコードが壊れる可能性があります。 ```typescript class BigInt extends Uint8Array { @@ -321,7 +321,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +この件に関して、アセンブリ・スクリプト・コンパイラーに問題を提起しましたが、 今のところ、もしサブグラフ・マッピングでこの種の操作を行う場合には、 その前にNULLチェックを行うように変更してください。 ```typescript let wrapper = new Wrapper(y) @@ -333,9 +333,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### Value initialization +### 値の初期化 -If you have any code like this: +もし、このようなコードがあった場合: ```typescript var value: Type // null @@ -343,7 +343,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +これは、値が初期化されていないために起こります。したがって、次のようにサブグラフが値を初期化していることを確認してください。 ```typescript var value = new Type() // initialized @@ -351,7 +351,7 @@ value.x = 10 value.y = 'content' ``` -Also if you have nullable properties in a GraphQL entity, like this: +また、以下のようにGraphQLのエンティティにNullableなプロパティがある場合も同様です。 ```graphql type Total @entity { @@ -360,7 +360,7 @@ type Total @entity { } ``` -And you have code similar to this: +そして、以下のようなコードになります: ```typescript let total = Total.load('latest') @@ -372,7 +372,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. 
So you either initialize it first: +`total.amount`の値を確実に初期化する必要があります。なぜなら、最後の行のsumのようにアクセスしようとすると、クラッシュしてしまうからです。 そのため、最初に初期化する必要があります。 ```typescript let total = Total.load('latest') @@ -385,7 +385,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +あるいは、このプロパティに nullable 型を使用しないように GraphQL スキーマを変更することもできます。そうすれば、`コード生成`の段階でゼロとして初期化されます。 ```graphql type Total @entity { @@ -404,9 +404,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### Class property initialization +### クラスのプロパティの初期化 -If you export any classes with properties that are other classes (declared by you or by the standard library) like this: +以下のように、他のクラス(自分で宣言したものや標準ライブラリで宣言したもの)のプロパティを持つクラスをエクスポートした場合、そのクラスのプロパティを初期化します: ```typescript class Thing {} @@ -416,7 +416,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +コンパイラがエラーになるのは、クラスであるプロパティにイニシャライザを追加するか、`!` オペレーターを追加する必要があるからです。 ```typescript export class Something { @@ -440,11 +440,11 @@ export class Something { } ``` -### GraphQL schema +### GraphQLスキーマ -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +これはAssemblyScriptの直接的な変更ではありませんが、`schema.graphql`ファイルを更新する必要があるかもしれません。 -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +タイプの中にNon-Nullable Listのフィールドを定義することができなくなりました。 次のようなスキーマを持っているとします。 ```graphql type Something @entity { @@ -457,7 +457,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Listタイプのメンバーには、以下のように`!` を付ける必要があります。 ```graphql type Something @entity { @@ -470,7 +470,7 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +これはAssemblyScriptのバージョンによるnullabilityの違いから変更されたもので、`src/generated/schema.ts`ファイル(デフォルトのパス、あなたはこれを変更したかもしれません)に関連しています。 ### その他 From 612fb85aa2442af3b3ee9304ececbc458ec4f601 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:25 -0500 Subject: [PATCH 330/432] New translations graphql-api.mdx (Arabic) --- pages/ar/developer/graphql-api.mdx | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ar/developer/graphql-api.mdx b/pages/ar/developer/graphql-api.mdx index 7fec9b44fa66..6b2895103f0c 100644 --- a/pages/ar/developer/graphql-api.mdx +++ b/pages/ar/developer/graphql-api.mdx @@ -70,11 +70,11 @@ title: GraphQL API } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +للاستعلام عن مجموعات الكيانات في منتصف المجموعة ، يمكن استخدام البارامتر `skip` بالاصافة لبارامتر `first` لتخطي عدد محدد من الكيانات بدءا من بداية المجموعة. 
#### مثال -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +الاستعلام عن 10 كيانات `Token` ،بإزاحة 10 أماكن من بداية المجموعة: ```graphql { @@ -87,7 +87,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### مثال -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +إذا احتاج العميل إلى جلب عدد كبير من الكيانات ، فمن الأفضل أن تستند الاستعلامات إلى إحدى الصفات والفلترة حسب تلك الصفة. على سبيل المثال ، قد يجلب العميل عددا كبيرا من التوكن باستخدام هذا الاستعلام: ```graphql { @@ -100,15 +100,15 @@ If a client needs to retrieve a large number of entities, it is much more perfor } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +في المرة الأولى ، سيتم إرسال الاستعلام مع `lastID = ""` ، وبالنسبة للطلبات اللاحقة ، سيتم تعيين `lastID` إلى صفة `id` للكيان الأخير في الطلب السابق. أداء هذا الأسلوب أفضل بكثير من استخدام زيادة قيم `skip`. -### Filtering +### الفلترة -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +يمكنك استخدام البارامتر `where` في الاستعلام لتصفية الخصائص المختلفة. يمكنك الفلترة على قيم متعددة ضمن البارامتر `where`. #### مثال -Query challenges with `failed` outcome: +تحديات الاسعلام مع نتيجة `failed`: ```graphql { @@ -122,7 +122,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +يمكنك استخدام لواحق مثل ` _gt ` ، ` _lte ` لمقارنة القيم: #### مثال @@ -136,7 +136,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: } ``` -Full list of parameter suffixes: +القائمة الكاملة للواحق البارامترات: ```graphql _not @@ -154,15 +154,15 @@ _not_starts_with _not_ends_with ``` -Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. +يرجى ملاحظة أن بعض اللواحق مدعومة فقط لأنواع معينة. على سبيل المثال ، ` Boolean ` يدعم فقط ` _not ` و ` _in ` و ` _not_in `. ### Time-travel queries -You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +يمكنك الاستعلام عن حالة الكيانات الخاصة بك ليس فقط للكتلة الأخيرة ، والتي هي افتراضيا ، ولكن أيضا لكتلة اعتباطية في الماضي. يمكن تحديد الكتلة التي يجب أن يحدث فيها الاستعلام إما عن طريق رقم الكتلة أو hash الكتلة الخاص بها عن طريق تضمين وسيطة ` block ` في حقول المستوى الأعلى للاستعلامات. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. 
+لن تتغير نتيجة مثل هذا الاستعلام بمرور الوقت ، أي أن الاستعلام في كتلة سابقة معينة سيعيد نفس النتيجة بغض النظر عن وقت تنفيذها ، باستثناء أنه إذا قمت بالاستعلام في كتلة قريبة جدا من رأس سلسلة Ethereum ، قد تتغير النتيجة إذا تبين أن هذه الكتلة ليست في السلسلة الرئيسية وتمت إعادة تنظيم السلسلة. بمجرد اعتبار الكتلة نهائية ، لن تتغير نتيجة الاستعلام. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +لاحظ أن التنفيذ الحالي لا يزال يخضع لقيود معينة قد تنتهك هذه الضمانات. لا يمكن للتنفيذ دائما أن يخبرنا أن hash كتلة معينة ليست في السلسلة الرئيسية ، أو أن نتيجة استعلام لكتلة عن طريق hash الكتلة لا يمكن اعتبارها نهائية ومع ذلك قد تتأثر بإعادة تنظيم الكتلة التي تعمل بشكل متزامن مع الاستعلام. لا تؤثر نتائج الاستعلامات عن طريق hash الكتلة عندما تكون الكتلة نهائية ومعروفة بأنها موجودة في السلسلة الرئيسية. [ تشرح هذه المشكلة ](https://github.com/graphprotocol/graph-node/issues/1405) ماهية هذه القيود بالتفصيل. #### مثال From 5c0bac0fc9a8779aaa6f5cf727fea878d5121953 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:27 -0500 Subject: [PATCH 331/432] New translations publish-subgraph.mdx (Japanese) --- pages/ja/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ja/developer/publish-subgraph.mdx b/pages/ja/developer/publish-subgraph.mdx index 2f35f5eb1bae..e2458c5412d8 100644 --- a/pages/ja/developer/publish-subgraph.mdx +++ b/pages/ja/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: Publish a Subgraph to the Decentralized Network +title: 分散型ネットワークへのサブグラフの公開 --- -Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. +サブグラフが [Subgraph Studioにデプロイ](/studio/deploy-subgraph-studio)され、それをテストし、本番の準備ができたら、分散型ネットワークにパブリッシュすることができます。 -Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. +サブグラフを分散型ネットワークに公開すると、[キュレーター](/curating)がキュレーションを開始したり、[インデクサー](/indexing)がインデックスを作成したりできるようになります。 -For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). +分散型ネットワークにサブグラフを公開する方法については、[こちらのビデオ](https://youtu.be/HfDgC2oNnwo?t=580)をご覧ください。 -### Networks +### ネットワーク -The decentralized network currently supports both Rinkeby and Ethereum Mainnet. +分散型ネットワークは現在、RinkebyとEthereum Mainnetの両方をサポートしています。 -### Publishing a subgraph +### サブグラフの公開 -Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). 
+サブグラフは、Subgraph Studioのダッシュボードから**Publish** ボタンをクリックすることで、直接分散型ネットワークに公開することができます。 サブグラフが公開されると、[Graph Explorer](https://thegraph.com/explorer/)で閲覧できるようになります。 -- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. +- Rinkebyに公開されたサブグラフは、RinkebyネットワークまたはEthereum Mainnetのいずれかからデータをインデックス化してクエリすることができます。 -- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. +- Ethereum Mainnetに公開されたサブグラフは、Ethereum Mainnetのデータのみをインデックス化してクエリすることができます。つまり、テストネットのデータをインデックス化して照会するサブグラフをメインの分散型ネットワークに公開することはできません。 -- When publishing a new version for an existing subgraph the same rules apply as above. +- 既存のサブグラフの新バージョンを公開する場合は、上記と同じルールが適用されます。 -### Updating metadata for a published subgraph +### 公開されたサブグラフのメタデータの更新 -Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. +サブグラフが分散型ネットワークに公開されると、サブグラフのSubgraph Studioダッシュボードで更新を行うことにより、いつでもメタデータを変更することができます。 変更を保存し、更新内容をネットワークに公開すると、グラフエクスプローラーに反映されます。 デプロイメントが変更されていないため、新しいバージョンは作成されません。 From 2c881c4734bd686c41148603fa99fb83ae9ef396 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:28 -0500 Subject: [PATCH 332/432] New translations query-the-graph.mdx (Japanese) --- pages/ja/developer/query-the-graph.mdx | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/pages/ja/developer/query-the-graph.mdx b/pages/ja/developer/query-the-graph.mdx index ae480b1e6883..0a90f97c368b 100644 --- a/pages/ja/developer/query-the-graph.mdx +++ b/pages/ja/developer/query-the-graph.mdx @@ -1,32 +1,31 @@ --- -title: Query The Graph +title: グラフのクエリ --- -With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +サブグラフがデプロイされた状態で、[Graph Explorer](https://thegraph.com/explorer)にアクセスすると、[GraphiQL](https://github.com/graphql/graphiql)インターフェースが表示され、サブグラフにデプロイされたGraphQL APIを探索して、クエリを発行したり、スキーマを表示したりすることができます。 -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +以下に例を示しますが、サブグラフのエンティティへのクエリの方法については、[Query API](/developer/graphql-api)を参照してください。 -#### Example +#### 例 -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +このクエリは、マッピングが作成したすべてのカウンターを一覧表示します。 作成するのは1つだけなので、結果には1つの`デフォルトカウンター -```graphql -{ +
{
   counters {
     id
     value
   }
 }
-```
+`
-## Using The Graph Explorer +## グラフエクスプローラの利用 -Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. +分散型グラフエクスプローラに公開されているサブグラフには、それぞれ固有のクエリURLが設定されており、サブグラフの詳細ページに移動し、右上の「クエリ」ボタンをクリックすることで確認できます。 これは、サブグラフの詳細ページに移動し、右上の「クエリ」ボタンをクリックすると、サブグラフの固有のクエリURLと、そのクエリの方法を示すサイドペインが表示されます。 ![Query Subgraph Pane](/img/query-subgraph-pane.png) -As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). +お気づきのように、このクエリURLには固有のAPIキーを使用する必要があります。 APIキーの作成と管理は、[Subgraph Studio](https://thegraph.com/studio)の「API Keys」セクションで行うことができます。 Subgraph Studioの使用方法については、[こちら](/studio/subgraph-studio)をご覧ください。 -Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). +API キーを使用してサブグラフをクエリすると、GRT で支払われるクエリ料金が発生します。 課金については[こちら](/studio/billing)をご覧ください。 -You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. +また、「プレイグラウンド」タブのGraphQLプレイグラウンドを使用して、The Graph Explorer内のサブグラフに問い合わせを行うことができます。 From 07d59050dc4d7fd1a2d5000590337ced1529e86b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:29 -0500 Subject: [PATCH 333/432] New translations querying-from-your-app.mdx (Japanese) --- pages/ja/developer/querying-from-your-app.mdx | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/pages/ja/developer/querying-from-your-app.mdx b/pages/ja/developer/querying-from-your-app.mdx index c09c44efee72..e94a6f50046e 100644 --- a/pages/ja/developer/querying-from-your-app.mdx +++ b/pages/ja/developer/querying-from-your-app.mdx @@ -1,40 +1,40 @@ --- -title: Querying from an Application +title: アプリケーションからのクエリ --- -Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: +サブグラフがSubgraph StudioまたはGraph Explorerにデプロイされると、GraphQL APIのエンドポイントが与えられ、以下のような形になります。 -**Subgraph Studio (testing endpoint)** +**Subgraph Studio (テスト用エンドポイント)** ```sh Queries (HTTP) https://api.studio.thegraph.com/query/// ``` -**Graph Explorer** +**グラフエクスプローラ** ```sh Queries (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. +GraphQLエンドポイントを使用すると、さまざまなGraphQLクライアントライブラリを使用してサブグラフをクエリし、サブグラフによってインデックス化されたデータをアプリに入力することができます。 -Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: +ここでは、エコシステムで人気のあるGraphQLクライアントをいくつか紹介し、その使い方を説明します: -### Apollo client +### Apolloクライアント -[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. +[Apolloクライアント](https://www.apollographql.com/docs/)は、ReactやVueなどのフレームワークを含むWebプロジェクトや、iOS、Android、React Nativeなどのモバイルクライアントをサポートしています。 -Let's look at how fetch data from a subgraph with Apollo client in a web project. 
+WebプロジェクトでApolloクライアントを使ってサブグラフからデータを取得する方法を見てみましょう。 -First, install `@apollo/client` and `graphql`: +まず、`@apollo/client`と`graphql`をインストールします: ```sh npm install @apollo/client graphql ``` -Then you can query the API with the following code: +その後、以下のコードでAPIをクエリできます: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -To use variables, you can pass in a `variables` argument to the query: +変数を使うには、クエリの引数に`variables` を渡します。 ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. +もう一つの選択肢は[URQL](https://formidable.com/open-source/urql/)で、URQLは、やや軽量なGraphQLクライアントライブラリです。 -Let's look at how fetch data from a subgraph with URQL in a web project. +URQLは、やや軽量なGraphQLクライアントライブラリです。 -First, install `urql` and `graphql`: +WebプロジェクトでURQLを使ってサブグラフからデータを取得する方法を見てみましょう。 まず、`urql`と`graphql`をインストールします。 ```sh npm install urql graphql ``` -Then you can query the API with the following code: +その後、以下のコードでAPIをクエリできます: ```javascript import { createClient } from 'urql' From 8deaeb21af9a0bc2032595f427e647aa15287c07 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:31 -0500 Subject: [PATCH 334/432] New translations quick-start.mdx (Japanese) --- pages/ja/developer/quick-start.mdx | 96 +++++++++++++++--------------- 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/pages/ja/developer/quick-start.mdx b/pages/ja/developer/quick-start.mdx index 6893d424ddc2..023f229a1f39 100644 --- a/pages/ja/developer/quick-start.mdx +++ b/pages/ja/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: Quick Start +title: クイックスタート --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph on: +このガイドでは、サブグラフの初期化、作成、デプロイの方法を素早く説明します: -- **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** -- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) +- **Subgraph Studio** - **Ethereum mainnet**をインデックスするサブグラフにのみ使用されます。 +- **Hosted Service** - Ethereumメインネット以外の **他のネットワーク**(Binance、Maticなど)にインデックスを付けるサブグラフに使用されます。 ## Subgraph Studio -### 1. Install the Graph CLI +### 1. Graph CLIのインストール -The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. +Graph CLIはJavaScriptで書かれており、使用するには `npm` または `yarn` のいずれかをインストールする必要があります。 ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. サブグラフの初期化 -- Initialize your subgraph from an existing contract. +- 既存のコントラクトからサブグラフを初期化します。 ```sh graph init --studio ``` -- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. +- サブグラフのスラッグは、サブグラフの識別子です。 CLIツールでは、以下のスクリーンショットに見られるように、コントラクトアドレス、ネットワークなど、サブグラフを作成するための手順を説明します。 ![Subgraph command](/img/Subgraph-Slug.png) -### 3. Write your Subgraph +### 3. サブグラフの作成 -The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. 
When making changes to the subgraph, you will mainly work with three files: +前述のコマンドでは、サブグラフを作成するための出発点として使用できるscaffoldサブグラフを作成します。 サブグラフに変更を加える際には、主に3つのファイルを使用します: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. +- マニフェスト (subgraph.yaml) - マニフェストは、サブグラフがインデックスするデータソースを定義します。 +- スキーマ (schema.graphql) - GraphQLスキーマは、サブグラフから取得したいデータを定義します。 +- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +サブグラフの書き方の詳細については、 [Create a Subgraph](/developer/create-subgraph-hosted) を参照してください。 -### 4. Deploy to the Subgraph Studio +### 4. Subgraph Studioへのデプロイ -- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. -- Click "Create" and enter the subgraph slug you used in step 2. -- Run these commands in the subgraph folder +- [https://thegraph.com/studio/](https://thegraph.com/studio/) にアクセスし、ウォレットを接続します。 +- 「Create」をクリックし、ステップ2で使用したサブグラフのスラッグを入力します。 +- サブグラフのフォルダで以下のコマンドを実行します。 ```sh $ graph codegen $ graph build ``` -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +- サブグラフの認証とデプロイを行います。 デプロイキーは、Subgraph StudioのSubgraphページにあります。 ```sh $ graph auth --studio $ graph deploy --studio ``` -- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` +- バージョンラベルの入力を求められます。 バージョンラベルの命名には、以下のような規約を使用することを強くお勧めします。 例: `0.0.1`, `v1`, `version1` -### 5. Check your logs +### 5. ログの確認 -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +エラーが発生した場合は、ログを確認してください。 サブグラフが失敗している場合は、 [GraphiQL Playground](https://graphiql-online.com/) を使ってサブグラフの健全性をクエリすることができます。 [このエンドポイント](https://api.thegraph.com/index-node/graphql) を使用します。 なお、以下のクエリを活用して、サブグラフのデプロイメントIDを入力することができます。 この場合、 `Qm...` がデプロイメントIDです(これはSubgraphページの**Details**に記載されています)。 以下のクエリは、サブグラフが失敗したときに教えてくれるので、適宜デバッグすることができます: ```sh { @@ -109,15 +109,15 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. サブグラフのクエリ -You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). 
+[以下の手順](/developer/query-the-graph)でサブグラフのクエリを実行できます。 APIキーを持っていない場合は、開発やステージングに使用できる無料の一時的なクエリURLを使って、自分のdappからクエリを実行できます。 フロントエンドアプリケーションからサブグラフを照会する方法については、[こちら](/developer/querying-from-your-app)の説明をご覧ください。 -## Hosted Service +## ホスティングサービス -### 1. Install the Graph CLI +### 1. Graph CLIのインストール -"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. +"Graph CLI "はnpmパッケージなので、使用するには`npm`または `yarn`がインストールされていなければなりません。 ```sh # NPM @@ -127,39 +127,39 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. サブグラフの初期化 -- Initialize your subgraph from an existing contract. +- 既存のコントラクトからサブグラフを初期化します。 ```sh $ graph init --product hosted-service --from-contract
``` -- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` +- サブグラフの名前を聞かれます。 形式は`/`です。 例:`graphprotocol/examplesubgraph` -- If you'd like to initialize from an example, run the command below: +- 例題から初期化したい場合は、以下のコマンドを実行します。 ```sh $ graph init --product hosted-service --from-example ``` -- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- 例の場合、サブグラフはDani GrantによるGravityコントラクトに基づいており、ユーザーのアバターを管理し、アバターが作成または更新されるたびに`NewGravatar`または`UpdateGravatar`イベントを発行します。 -### 3. Write your Subgraph +### 3. サブグラフの作成 -The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: +先ほどのコマンドで、サブグラフを作成するための足場ができました。 サブグラフに変更を加える際には、主に3つのファイルを使用します: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index -- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema +- マニフェスト (subgraph.yaml) - マニフェストは、サブグラフがインデックスするデータソースを定義します。 +- スキーマ (schema.graphql) - GraphQLスキーマは、サブグラフから取得したいデータを定義します。 +- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +サブグラフの書き方の詳細については、 [Create a Subgraph](/developer/create-subgraph-hosted) を参照してください。 -### 4. Deploy your Subgraph +### 4. サブグラフのデプロイ -- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account -- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. -- Run codegen in the subgraph folder +- Github アカウントを使用して[Hosted Service](https://thegraph.com/hosted-service/) にサインインします。 +- 「Add Subgraph」をクリックし、必要な情報を入力します。 手順2と同じサブグラフ名を使用します。 +- サブグラフのフォルダでcodegenを実行します。 ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. +- アクセストークンを追加して、サブグラフをデプロイします。 アクセストークンは、ダッシュボードのHosted Serviceにあります。 ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. Check your logs +### 5. ログの確認 -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +エラーが発生した場合は、ログを確認してください。 サブグラフが失敗している場合は、 [GraphiQL Playground](https://graphiql-online.com/) を使ってサブグラフの健全性をクエリすることができます。 [このエンドポイント](https://api.thegraph.com/index-node/graphql) を使用します。 なお、以下のクエリを活用して、サブグラフのデプロイメントIDを入力することができます。 この場合、 `Qm...` がデプロイメントIDです(これはSubgraphページの**Details**に記載されています)。 以下のクエリは、サブグラフが失敗したときに教えてくれるので、適宜デバッグすることができます: ```sh { @@ -222,6 +222,6 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. 
サブグラフのクエリ -Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. +[こちらの手順](/hosted-service/query-hosted-service)に従って、ホステッドサービスでサブグラフをクエリします。 From c3fad71056303bd23407954b9086d615a857e027 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:32 -0500 Subject: [PATCH 335/432] New translations query-hosted-service.mdx (Japanese) --- pages/ja/hosted-service/query-hosted-service.mdx | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/pages/ja/hosted-service/query-hosted-service.mdx b/pages/ja/hosted-service/query-hosted-service.mdx index 731e3a3120b2..b860c58f632e 100644 --- a/pages/ja/hosted-service/query-hosted-service.mdx +++ b/pages/ja/hosted-service/query-hosted-service.mdx @@ -4,20 +4,19 @@ title: Query the Hosted Service With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +以下に例を示しますが、サブグラフのエンティティへのクエリの方法については、[Query API](/developer/graphql-api)を参照してください。 #### Example -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +このクエリは、マッピングが作成したすべてのカウンターを一覧表示します。 作成するのは1つだけなので、結果には1つの`デフォルトカウンター -```graphql -{ +
{
   counters {
     id
     value
   }
 }
-```
+`
## Using The Hosted Service From 98e3a9265defe23a2e85e42445293b940f30a00d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 01:13:33 -0500 Subject: [PATCH 336/432] New translations deploy-subgraph-studio.mdx (Japanese) --- pages/ja/studio/deploy-subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/deploy-subgraph-studio.mdx b/pages/ja/studio/deploy-subgraph-studio.mdx index 2155d8fe8976..69b6786ebda4 100644 --- a/pages/ja/studio/deploy-subgraph-studio.mdx +++ b/pages/ja/studio/deploy-subgraph-studio.mdx @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. -## Initialize your Subgraph +## サブグラフの初期化 Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From c0e832c99af8cb97ea0a5c425938b5e9b2dda8dd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 02:24:02 -0500 Subject: [PATCH 337/432] New translations distributed-systems.mdx (Arabic) --- pages/ar/developer/distributed-systems.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/developer/distributed-systems.mdx b/pages/ar/developer/distributed-systems.mdx index 894fcbe2e18b..96b3a27ec7ac 100644 --- a/pages/ar/developer/distributed-systems.mdx +++ b/pages/ar/developer/distributed-systems.mdx @@ -1,10 +1,10 @@ --- -title: Distributed Systems +title: الانظمة الموزعة --- -The Graph is a protocol implemented as a distributed system. +The Graph هو بروتوكول يتم تنفيذه كنظام موزع. -Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. +فشل الاتصالات. وصول الطلبات خارج الترتيب. أجهزة الكمبيوتر المختلفة ذات الساعات والحالات غير المتزامنة تعالج الطلبات ذات الصلة. الخوادم تعيد التشغيل. حدوث عمليات إعادة التنظيم بين الطلبات. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. From 6dec1327a838cd65f2efe0b06bbbe9844d85cf17 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 02:24:03 -0500 Subject: [PATCH 338/432] New translations graphql-api.mdx (Arabic) --- pages/ar/developer/graphql-api.mdx | 44 +++++++++++++++--------------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/pages/ar/developer/graphql-api.mdx b/pages/ar/developer/graphql-api.mdx index 6b2895103f0c..15ab979dacff 100644 --- a/pages/ar/developer/graphql-api.mdx +++ b/pages/ar/developer/graphql-api.mdx @@ -178,7 +178,7 @@ _not_ends_with } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +سيعود هذا الاستعلام بكيانات ` Challenge ` وكيانات ` Application ` المرتبطة بها ، كما كانت موجودة مباشرة بعد معالجة رقم الكتلة 8،000،000. 
#### مثال @@ -194,26 +194,26 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +سيعود هذا الاستعلام بكيانات ` Challenge ` وكيانات ` Application ` المرتبطة بها ، كما كانت موجودة مباشرة بعد معالجة الكتلة باستخدام hash المحددة. -### Fulltext Search Queries +### استعلامات بحث النص الكامل -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. +حقول استعلام البحث عن نص كامل توفر API للبحث عن نص تعبيري يمكن إضافتها إلى مخطط الـ subgraph وتخصيصها. راجع [ تعريف حقول بحث النص الكامل ](/developer/create-subgraph-hosted#defining-fulltext-search-fields) لإضافة بحث نص كامل إلى الـ subgraph الخاص بك. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +استعلامات البحث عن النص الكامل لها حقل واحد مطلوب ، وهو ` text ` ، لتوفير عبارة البحث. تتوفر العديد من عوامل النص الكامل الخاصة لاستخدامها في حقل البحث ` text `. -Fulltext search operators: +عوامل تشغيل البحث عن النص الكامل: -| Symbol | Operator | الوصف | -| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| رمز | عامل التشغيل | الوصف | +| ----------- | ------------ | --------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة | +| | | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة | +| `<->` | `Follow by` | يحدد المسافة بين كلمتين. | +| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) | #### أمثلة -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +باستخدام العامل ` or` ، سيقوم الاستعلام هذا بتصفية blog الكيانات التي تحتوي على أشكال مختلفة من "anarchism" أو "crumpet" في حقول النص الكامل الخاصة بها. ```graphql { @@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +العامل ` follow by ` يحدد الكلمات بمسافة محددة عن بعضها في مستندات النص-الكامل. 
الاستعلام التالي سيعيد جميع الـ blogs التي تحتوي على أشكال مختلفة من "decentralize" متبوعة بكلمة "philosophy" ```graphql { @@ -239,7 +239,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +اجمع بين عوامل تشغيل النص-الكامل لعمل فلترة أكثر تعقيدا. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". ```graphql { @@ -252,16 +252,16 @@ Combine fulltext operators to make more complex filters. With a pretext search o } ``` -## Schema +## المخطط -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +يتم تعريف مخطط مصدر البيانات الخاص بك - أي أنواع الكيانات والقيم والعلاقات المتاحة للاستعلام - من خلال [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +مخططات GraphQL تعرف عموما أنواع الجذر لـ `queries`, و `subscriptions` و`mutations`. The Graph يدعم فقط `queries`. يتم إنشاء نوع الجذر `Query` لـ subgraph تلقائيا من مخطط GraphQL المضمن في subgraph manifest الخاص بك. -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> ** ملاحظة: ** الـ API الخاصة بنا لا تعرض الـ mutations لأنه يُتوقع من المطورين إصدار إجراءات مباشرة لـblockchain الأساسي من تطبيقاتهم. -### Entities +### الكيانات -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +سيتم التعامل مع جميع أنواع GraphQL التي تحتوي على توجيهات `entity@ ` في مخططك على أنها كيانات ويجب أن تحتوي على حقل ` ID `. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> ** ملاحظة: ** في الوقت الحالي ، يجب أن تحتوي جميع الأنواع في مخططك على توجيه `entity@ `. في المستقبل ، سنتعامل مع الأنواع التي لا تحتوي على التوجيه `entity@ ` ككائنات، لكن هذا غير مدعوم حتى الآن. From 46e3dd58ce3d8f1fe0a4bc51433ea7bdc1b0ee4d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 03:28:22 -0500 Subject: [PATCH 339/432] New translations distributed-systems.mdx (Arabic) --- pages/ar/developer/distributed-systems.mdx | 46 +++++++++++----------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/pages/ar/developer/distributed-systems.mdx b/pages/ar/developer/distributed-systems.mdx index 96b3a27ec7ac..e647ca602f02 100644 --- a/pages/ar/developer/distributed-systems.mdx +++ b/pages/ar/developer/distributed-systems.mdx @@ -4,34 +4,34 @@ title: الانظمة الموزعة The Graph هو بروتوكول يتم تنفيذه كنظام موزع. -فشل الاتصالات. وصول الطلبات خارج الترتيب. أجهزة الكمبيوتر المختلفة ذات الساعات والحالات غير المتزامنة تعالج الطلبات ذات الصلة. الخوادم تعيد التشغيل. 
حدوث عمليات إعادة التنظيم بين الطلبات. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. +فشل الاتصالات. وصول الطلبات خارج الترتيب. أجهزة الكمبيوتر المختلفة ذات الساعات والحالات غير المتزامنة تعالج الطلبات ذات الصلة. الخوادم تعيد التشغيل. حدوث عمليات Re-orgs بين الطلبات. هذه المشاكل متأصلة في جميع الأنظمة الموزعة ولكنها تتفاقم في الأنظمة التي تعمل على نطاق عالمي. -Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. +ضع في اعتبارك هذا المثال لما قد يحدث إذا قام العميل بـ polls للمفهرس للحصول على أحدث البيانات أثناء re-org. -1. Indexer ingests block 8 -2. Request served to the client for block 8 -3. Indexer ingests block 9 -4. Indexer ingests block 10A -5. Request served to the client for block 10A -6. Indexer detects reorg to 10B and rolls back 10A -7. Request served to the client for block 9 -8. Indexer ingests block 10B -9. Indexer ingests block 11 -10. Request served to the client for block 11 +1. المفهرس يستوعب الكتلة 8 +2. تقديم الطلب للعميل للمجموعة 8 +3. يستوعب المفهرس الكتلة 9 +4. المفهرس يستوعب الكتلة 10A +5. تقديم الطلب للعميل للكتلة 10A +6. يكتشف المفهرس reorg لـ 10B ويسترجع 10A +7. تقديم الطلب للعميل للكتلة 9 +8. المفهرس يستوعب الكتلة 10B +9. المفهرس يستوعب الكتلة 11 +10. تقديم الطلب للعميل للكتلة 11 -From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. +من وجهة نظر المفهرس ، تسير الأمور إلى الأمام بشكل منطقي. الوقت يمضي قدما ، على الرغم من أننا اضطررنا إلى التراجع عن كتلة الـ uncle وتشغيل الكتلة وفقا للاتفاق. على طول الطريق ، يقدم المفهرس الطلبات باستخدام أحدث حالة يعرفها في ذلك الوقت. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +لكن من وجهة نظر العميل ، تبدو الأمور مشوشة. يلاحظ العميل أن الردود كانت للكتل 8 و 10 و 9 و 11 بهذا الترتيب. نسمي هذا مشكلة "تذبذب الكتلة". عندما يواجه العميل تذبذبا في الكتلة ، فقد تظهر البيانات متناقضة مع نفسها بمرور الوقت. يزداد الموقف سوءا عندما نعتبر أن المفهرسين لا يستوعبون جميع الكتل الأخيرة في وقت واحد ، وقد يتم توجيه طلباتك إلى عدة مفهرسين. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +تقع على عاتق العميل والخادم مسؤولية العمل معا لتوفير بيانات متسقة للمستخدم. يجب استخدام طرق مختلفة اعتمادا على الاتساق المطلوب حيث لا يوجد برنامج واحد مناسب لكل مشكلة. -Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. +الاستنتاج من خلال الآثار المترتبة على الأنظمة الموزعة أمر صعب ، لكن الإصلاح قد لا يكون كذلك! 
لقد أنشأنا APIs وأنماط لمساعدتك على تصفح بعض حالات الاستخدام الشائعة. توضح الأمثلة التالية هذه الأنماط ولكنها لا تزال تتجاهل التفاصيل التي يتطلبها كود الإنتاج (مثل معالجة الأخطاء والإلغاء) حتى لا يتم تشويش الأفكار الرئيسية. -## Polling for updated data +## Polling للبيانات المحدثة -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph يوفر `block: { number_gte: $minBlock }` API ، والتي تضمن أن تكون الاستجابة لكتلة واحدة تزيد أو تساوي `$minBlock`. إذا تم إجراء الطلب لـ `graph-node` instance ولم تتم مزامنة الكتلة الدنيا بعد ، فسيرجع `graph-node` بخطأ. إذا قام `graph-node` بمزامنة الكتلة الدنيا ، فسيتم تشغيل الاستجابة لأحدث كتلة. إذا تم تقديم الطلب إلى Edge & Node Gateway ، ستقوم الـ Gateway بفلترة المفهرسين الذين لم يقوموا بعد بمزامنة الكتلة الدنيا وتجعل الطلب لأحدث كتلة قام المفهرس بمزامنتها. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +يمكننا استخدام ` number_gte ` لضمان عدم عودة الوقت إلى الوراء عند عمل polling للبيانات في الحلقة. هنا مثال لذلك: ```javascript /// Updates the protocol.paused variable to the latest @@ -73,11 +73,11 @@ async function updateProtocolPaused() { } ``` -## Fetching a set of related items +## جلب مجموعة من العناصر المرتبطة -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +حالة أخرى هي جلب مجموعة كبيرة أو بشكل عام جلب العناصر المرتبطة عبر طلبات متعددة. على عكس حالة الـ polling (حيث كان التناسق المطلوب هو المضي قدما في الزمن) ، فإن الاتساق المطلوب هو لنقطة واحدة في الزمن. -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +هنا سوف نستخدم الوسيطة `block: { hash: $blockHash }` لتثبيت جميع نتائجنا في نفس الكتلة. ```javascript /// Gets a list of domain names from a single block using pagination @@ -129,4 +129,4 @@ async function getDomainNames() { } ``` -Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. +لاحظ أنه في حالة re-org ، سيحتاج العميل إلى إعادة المحاولة من الطلب الأول لتحديث hash الكتلة إلى كتلة non-uncle. 
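Editor's note on the distributed-systems patches above: they carry the document's guidance on the `block: { number_gte: $minBlock }` and `block: { hash: $blockHash }` query arguments, but the surrounding JavaScript only calls abstract helpers. Below is a minimal TypeScript sketch (not part of any patch in this series) of the raw GraphQL bodies such a client might send. The `protocols` and `domains` entities and the endpoint URL are illustrative assumptions only; substitute your own subgraph's schema and query URL.

```typescript
// Sketch only — entity names and URL are placeholders, not taken from the patches above.
const SUBGRAPH_URL = 'https://api.thegraph.com/subgraphs/name/<ACCOUNT>/<SUBGRAPH>'

async function querySubgraph(query: string): Promise<any> {
  const res = await fetch(SUBGRAPH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const body = await res.json()
  if (body.errors) throw new Error(JSON.stringify(body.errors))
  return body.data
}

// Forward-only polling: refuse results older than the highest block already seen,
// so responses can never appear to move backwards in time.
function pollQuery(minBlock: number): string {
  return `{
    protocols(first: 1, block: { number_gte: ${minBlock} }) { paused }
  }`
}

// Consistent pagination: pin every page of a multi-request read to one block hash,
// obtained out of band (for example from an Ethereum client or a prior response).
function pageQuery(blockHash: string, lastName: string): string {
  return `{
    domains(first: 1000, block: { hash: "${blockHash}" }, where: { name_gt: "${lastName}" }) {
      name
    }
  }`
}
```

On a re-org, a request pinned to an uncled hash fails, which is the client's cue to restart the paginated read from a fresh hash — the behaviour the final paragraph of the patch above describes.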
From 14ab9757a0ebaea9ba7b31f4691fdbf182248cb7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 03:28:23 -0500 Subject: [PATCH 340/432] New translations publish-subgraph.mdx (Arabic) --- pages/ar/developer/publish-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/developer/publish-subgraph.mdx b/pages/ar/developer/publish-subgraph.mdx index 2f35f5eb1bae..4b702af69b9b 100644 --- a/pages/ar/developer/publish-subgraph.mdx +++ b/pages/ar/developer/publish-subgraph.mdx @@ -1,5 +1,5 @@ --- -title: Publish a Subgraph to the Decentralized Network +title: نشر Subgraph للشبكة اللامركزية --- Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. From a2fec3fe2a22b082f059d8c29c6158327b05887c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 03:28:24 -0500 Subject: [PATCH 341/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index 30471b974dff..1fd03aa7ea01 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -32,23 +32,23 @@ title: 将子图部署到托管服务上 After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). -## Deploy a Subgraph on the Hosted Service +## 在托管服务上部署子图 -Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files. +一旦部署您的子图,您使用`yarn build` 命令构建的子图文件将被上传到 IPFS,并告诉 Graph Explorer 开始使用这些文件索引您的子图。 -You deploy the subgraph by running `yarn deploy` +您可以通过运行 `yarn deploy`来部署子图。 -After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. +部署子图后,Graph Explorer将切换到显示子图的同步状态。 根据需要从历史以太坊区块中提取的数据量和事件数量的不同,从创世区块开始,同步操作可能需要几分钟到几个小时。 一旦 Graph节点从历史区块中提取了所有数据,子图状态就会切换到`Synced`。 当新的以太坊区块出现时,Graph节点将继续按照子图的要求检查这些区块的信息。 -## Redeploying a Subgraph +## 重新部署子图 -When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. +更改子图定义后,例如:修复实体映射中的一个问题,再次运行上面的 `yarn deploy` 命令可以部署新版本的子图。 子图的任何更新都需要Graph节点再次从创世块开始重新索引您的整个子图。 -If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. 
If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. +如果您之前部署的子图仍处于`Syncing`状态,系统则会立即将其替换为新部署的版本。 如果之前部署的子图已经完全同步,Graph节点会将新部署的版本标记为`Pending Version`,在后台进行同步,只有在新版本同步完成后,才会用新的版本替换当前部署的版本。 这样做可以确保在新版本同步时您仍然有子图可以使用。 -### Deploying the subgraph to multiple Ethereum networks +### 将子图部署到多个以太坊网络 -In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +在某些情况下,您可能希望将相同的子图部署到多个以太坊网络,而无需复制其所有代码。 这样做的主要挑战是这些网络上的合约地址不同。 One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: @@ -116,11 +116,11 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -## Checking subgraph health +## 检查子图状态 -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +如果子图成功同步,这是表明它将运行良好的一个好的信号。 但是,链上的新事件可能会导致您的子图遇到未经测试的错误环境,或者由于性能或节点方面的问题而开始落后于链上数据。 -Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph 节点公开了一个 graphql 端点,您可以通过查询该端点来检查子图的状态。 在托管服务上,该端点的链接是 `https://api.thegraph.com/index-node/graphql`。 在本地节点上,默认情况下该端点在端口 `8030/graphql` 上可用。 该端点的完整数据模式可以在[此处](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。 这是一个检查子图当前版本状态的示例查询: ```graphql { @@ -147,14 +147,14 @@ Graph Node exposes a graphql endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case you can check the `fatalError` field for details on this error. +这将为您提供 `chainHeadBlock`,您可以将其与子图上的 `latestBlock` 进行比较,以检查子图是否落后。 通过`synced`,可以了解子图是否与链上数据完全同步。 如果子图没有发生错误,`health` 将返回`healthy`,如果有一个错误导致子图的同步进度停止,那么 `health`将返回`failed` 。 在这种情况下,您可以检查 `fatalError` 字段以获取有关此错误的详细信息。 -## Subgraph archive policy +## 子图归档策略 -The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. +托管服务是一个免费的Graph节点索引器。 开发人员可以部署索引一系列网络的子图,这些网络将被索引,并可以通过 graphQL 进行查询。 -To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. +为了提高活跃子图的服务性能,托管服务将归档不活跃的子图。 -**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.** +**如果一个子图在 45 天前部署到托管服务,并且在过去 30 天内收到 0 个查询,则将其定义为“不活跃”。** -Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. +如果开发人员的一个子图被标记为不活跃,并将 7 天后被删除,托管服务会通过电子邮件通知开发者。 如果他们希望“激活”他们的子图,他们可以通过在其子图的托管服务 graphQL playground中发起查询来实现。 如果再次需要使用这个子图,开发人员也可以随时重新部署存档的子图。 From 9e5fb51bc27da75625bd14505daf4266b647ef09 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 04:36:03 -0500 Subject: [PATCH 342/432] New translations publish-subgraph.mdx (Arabic) --- pages/ar/developer/publish-subgraph.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/ar/developer/publish-subgraph.mdx b/pages/ar/developer/publish-subgraph.mdx index 4b702af69b9b..2e84a70b9a1e 100644 --- a/pages/ar/developer/publish-subgraph.mdx +++ b/pages/ar/developer/publish-subgraph.mdx @@ -2,19 +2,19 @@ title: نشر Subgraph للشبكة اللامركزية --- -Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. +بمجرد أن الـ subgraph الخاص بك [قد تم نشره لـ Subgraph Studio](/studio/deploy-subgraph-studio) ، وقمت باختباره ، وأصبحت جاهزا لوضعه في الإنتاج ، يمكنك بعد ذلك نشره للشبكة اللامركزية. -Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. +يؤدي نشر Subgraph على الشبكة اللامركزية إلى الإتاحة [ للمنسقين ](/curating) لبدء التنسيق، و [ للمفهرسين](/indexing) لبدء الفهرسة. -For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). +للحصول على إرشادات حول كيفية نشر subgraph على الشبكة اللامركزية ، راجع [ هذا الفيديو ](https://youtu.be/HfDgC2oNnwo؟t=580). -### Networks +### الشبكات -The decentralized network currently supports both Rinkeby and Ethereum Mainnet. +تدعم الشبكة اللامركزية حاليا كلا من Rinkeby و Ethereum Mainnet. -### Publishing a subgraph +### نشر subgraph -Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). 
+يمكن نشر الـ Subgraphs على الشبكة اللامركزية مباشرة من Subgraph Studio dashboard بالنقر فوق الزر ** Publish **. بمجرد نشر الـ subgraph ، فإنه سيكون متاحا للعرض في [ Graph Explorer ](https://thegraph.com/explorer/). - Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. From 2891330ce637a71db10882eda8d813e64b6d2bf5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 04:36:05 -0500 Subject: [PATCH 343/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index 1fd03aa7ea01..5fe5ccacae0e 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -20,17 +20,17 @@ title: 将子图部署到托管服务上 **子图名称** - 子图名称连同下面将要创建的子图帐户名称,将定义用于部署和 GraphQL 端点的`account-name/subgraph-name`样式名称。 _此字段以后无法更改。_ -**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ +**帐户** - 创建子图的帐户。 这可以是个人或组织的帐户。 _以后不能在帐户之间移动子图。_ -**Subtitle** - Text that will appear in subgraph cards. +**副标题** - 将出现在子图卡中的文本。 -**Description** - Description of the subgraph, visible on the subgraph details page. +**描述** - 子图的描述,在子图详细信息页面上可见。 -**GitHub URL** - Link to the subgraph repository on GitHub. +**GitHub URL** - 存储在GitHub 上的子图代码的链接。 -**Hide** - Switching this on hides the subgraph in the Graph Explorer. +**隐藏** - 打开此选项可隐藏Graph Explorer中的子图。 -After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). +保存新子图后,您会看到一个屏幕,其中包含有关如何安装 Graph CLI、如何为新子图生成脚手架以及如何部署子图的帮助信息。 前面两部分在[定义子图](/developer/define-subgraph-hosted)中进行了介绍。 ## 在托管服务上部署子图 @@ -48,9 +48,9 @@ After saving the new subgraph, you are shown a screen with help on how to instal ### 将子图部署到多个以太坊网络 -在某些情况下,您可能希望将相同的子图部署到多个以太坊网络,而无需复制其所有代码。 这样做的主要挑战是这些网络上的合约地址不同。 One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +在某些情况下,您可能希望将相同的子图部署到多个以太坊网络,而无需复制其所有代码。 这样做的主要挑战是这些网络上的合约地址不同。 允许参数化合约地址等配置的一种解决方案是使用 [Mustache](https://mustache.github.io/)或 [Handlebars](https://handlebarsjs.com/)等模板系统。 -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: +为了说明这种方法,我们假设使用不同的合约地址将子图部署到主网和 Ropsten上。 您可以定义两个配置文件,为每个网络提供相应的地址: ```json { @@ -59,7 +59,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -and +和 ```json { @@ -68,7 +68,7 @@ and } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +除此之外,您可以用变量占位符 `{{network}}` 和 `{{address}}` 替换清单中的网络名称和地址,并将清单重命名为例如 `subgraph.template.yaml`: ```yaml # ... 
@@ -85,7 +85,7 @@ dataSources: kind: ethereum/events ``` -In order generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +为了给每个网络生成清单,您可以向 `package.json` 添加两个附加命令,以及对 `mustache` 的依赖项: ```json { @@ -102,7 +102,7 @@ In order generate a manifest to either network, you could add two additional com } ``` -To deploy this subgraph for mainnet or Ropsten you would now simply run one of the two following commands: +要为主网或 Ropsten 部署此子图,您现在只需运行以下两个命令中的任意一个: ```sh # Mainnet: @@ -112,9 +112,9 @@ yarn prepare:mainnet && yarn deploy yarn prepare:ropsten && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +您可以在[这里](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759)找到一个工作示例。 -**Note:** This approach can also be applied more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. +**注意:** 这种方法也可以应用在更复杂的情况下,例如:需要替换的不仅仅是合约地址和网络名称,或者还需要从模板生成映射或 ABI。 ## 检查子图状态 From 7ed96cb0d8a17d602ca3ad9e0fb06759bfb09577 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 04:36:08 -0500 Subject: [PATCH 344/432] New translations near.mdx (Chinese Simplified) --- pages/zh/supported-networks/near.mdx | 52 ++++++++++++++-------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx index bff2f82364d9..e5980fba4e95 100644 --- a/pages/zh/supported-networks/near.mdx +++ b/pages/zh/supported-networks/near.mdx @@ -1,56 +1,56 @@ --- -title: Building Subgraphs on NEAR +title: 在 NEAR 上构建子图 --- -> NEAR support in Graph Node and on the Hosted Service is in beta: please contact near@thegraph.com with any questions about building NEAR subgraphs! +> Graph节点和托管服务中对NEAR 的支持目前处于测试阶段:任何有关构建 NEAR 子图的任何问题,请联系 near@thegraph.com! -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +本指南介绍了如何在[NEAR区块链](https://docs.near.org/)上构建索引智能合约的子图。 -## What is NEAR? +## NEAR是什么? -[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. +[NEAR](https://near.org/) 是一个用于构建去中心化应用程序的智能合约平台。 请访问 [官方文档](https://docs.near.org/docs/concepts/new-to-near) 了解更多信息。 -## What are NEAR subgraphs? +## NEAR子图是什么? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +Graph 为开发人员提供了一种被称为子图的工具,利用这个工具,开发人员能够处理区块链事件,并通过 GraphQL API提供结果数据。 [Graph节点](https://github.com/graphprotocol/graph-node)现在能够处理 NEAR 事件,这意味着 NEAR 开发人员现在可以构建子图来索引他们的智能合约。 -Subgraphs are event-based, which means that they listen for and then process on-chain events. 
There are currently two types of handlers supported for NEAR subgraphs: +子图是基于事件的,这意味着子图可以侦听并处理链上事件。 NEAR 子图目前支持两种类型的处理程序: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- 区块处理器: 这些处理程序在每个新区块上运行 +- 收据处理器: 每次在指定帐户上一个消息被执行时运行。 -[From the NEAR documentation](https://docs.near.org/docs/concepts/transaction#receipt): +[NEAR 文档中](https://docs.near.org/docs/concepts/transaction#receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Receipt是系统中唯一可操作的对象。 当我们在 NEAR 平台上谈论“处理交易”时,这最终意味着在某个时候“应用收据”。 -## Building a NEAR Subgraph +## 构建NEAR子图 -`@graphprotocol/graph-cli` is a command line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli`是一个用于构建和部署子图的命令行工具。 -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` 是子图特定类型的库。 -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR子图开发需要`0.23.0`以上版本的`graph-cli`,以及 `0.23.0`以上版本的`graph-ts`。 -> Building a NEAR subgraph is very similar to building a subgraph which indexes Ethereum. +> 构建 NEAR 子图与构建索引以太坊的子图非常相似。 -There are three aspects of subgraph definition: +子图定义包括三个方面: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** 子图清单,定义感兴趣的数据源以及如何处理它们。 NEAR 是一种全新`类型`数据源。 -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). +**schema.graphql:** 一个模式文件,它定义为您的子图存储哪些数据,以及如何通过 GraphQL 查询它。 NEAR 子图的要求包含在 [现有文档](/developer/create-subgraph-hosted#the-graphql-schema)中。 -**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. +**AssemblyScript 映射:**将事件数据转换为模式文件中定义的实体的[AssemblyScript 代码](/developer/assemblyscript-api)。 NEAR 支持引入了 NEAR 特定的数据类型和新的JSON 解析功能。 -During subgraph development there are two key commands: +在子图开发过程中,有两个关键命令: ```bash -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph codegen # 从清单中标识的模式文件生成类型 +$ graph build # 从 AssemblyScript 文件生成 Web Assembly,并在 /build 文件夹中准备所有子图文件 ``` -### Subgraph Manifest Definition +### 子图清单定义 -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph:: +子图清单(`subgraph.yaml`)标识子图的数据源、感兴趣的触发器以及响应这些触发器而运行的函数。 以下是一个NEAR 的子图清单的例子: ```yaml specVersion: 0.0.2 From ef79a2a04165f77e0bba717f11dd4ef9e0b88a53 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 06:31:07 -0500 Subject: [PATCH 345/432] New translations publish-subgraph.mdx (Arabic) --- pages/ar/developer/publish-subgraph.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/pages/ar/developer/publish-subgraph.mdx b/pages/ar/developer/publish-subgraph.mdx index 2e84a70b9a1e..3d51eccafeed 100644 --- a/pages/ar/developer/publish-subgraph.mdx +++ b/pages/ar/developer/publish-subgraph.mdx @@ -16,12 +16,12 @@ title: نشر Subgraph للشبكة اللامركزية يمكن نشر الـ Subgraphs على الشبكة اللامركزية مباشرة من Subgraph Studio dashboard بالنقر فوق الزر ** Publish **. بمجرد نشر الـ subgraph ، فإنه سيكون متاحا للعرض في [ Graph Explorer ](https://thegraph.com/explorer/). -- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. +- يمكن لـ Subgraphs المنشور على Rinkeby فهرسة البيانات والاستعلام عنها من شبكة Rinkeby أو Ethereum Mainnet. -- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. +- يمكن لـ Subgraphs المنشور على Ethereum Mainnet فقط فهرسة البيانات والاستعلام عنها من Ethereum Mainnet ، مما يعني أنه لا يمكنك نشر الـ subgraphs على الشبكة اللامركزية الرئيسية التي تقوم بفهرسة بيانات testnet والاستعلام عنها. -- When publishing a new version for an existing subgraph the same rules apply as above. +- عند نشر نسخة جديدة لـ subgraph حالي ، تنطبق عليه نفس القواعد أعلاه. -### Updating metadata for a published subgraph +### تحديث بيانات الـ subgraph المنشور -Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. +بمجرد نشر الـ subgraph الخاص بك على الشبكة اللامركزية ، يمكنك تعديل البيانات الوصفية في أي وقت عن طريق إجراء التحديث في Subgraph Studio dashboard لـ subgraph. بعد حفظ التغييرات ونشر تحديثاتك على الشبكة ، ستنعكس في the Graph Explorer. لن يؤدي هذا إلى إنشاء إصدار جديد ، لأن النشر الخاص بك لم يتغير. From dbe4b96295ecc468788badb606b06c69f610ba3c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 06:31:09 -0500 Subject: [PATCH 346/432] New translations query-the-graph.mdx (Arabic) --- pages/ar/developer/query-the-graph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/developer/query-the-graph.mdx b/pages/ar/developer/query-the-graph.mdx index 9126d75b5896..7cb404a200ce 100644 --- a/pages/ar/developer/query-the-graph.mdx +++ b/pages/ar/developer/query-the-graph.mdx @@ -1,5 +1,5 @@ --- -title: Query The Graph +title: الاستعلام عن The Graph --- With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. 
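Editor's note on the NEAR patch above: it explains that receipt handlers run whenever a message is executed at the specified account, but the excerpt stops at the manifest. The following is a heavily hedged AssemblyScript (TypeScript-syntax) sketch of what the corresponding mapping could look like. The handler name is arbitrary, and the `near.ReceiptWithOutcome` type with its `signerId`/`receiverId` fields reflects my reading of graph-ts's NEAR support rather than anything shown in the patch — verify the exact names against the installed `@graphprotocol/graph-ts` version.

```typescript
// mapping.ts — illustrative only; check type and field names against graph-ts.
import { near, log } from '@graphprotocol/graph-ts'

// Intended to run once per receipt executed at the account named by the
// receipt handler entry in the manifest.
export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
  const actionReceipt = receiptWithOutcome.receipt
  log.info('receipt at {} signed by {}', [actionReceipt.receiverId, actionReceipt.signerId])
}
```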
From 991ecc421406f8e355d8fc88a0764d7d9a79ebea Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 07:57:43 -0500 Subject: [PATCH 347/432] New translations deprecating-a-subgraph.mdx (Arabic) --- pages/ar/developer/deprecating-a-subgraph.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/deprecating-a-subgraph.mdx b/pages/ar/developer/deprecating-a-subgraph.mdx index f8966e025c13..2d83064709da 100644 --- a/pages/ar/developer/deprecating-a-subgraph.mdx +++ b/pages/ar/developer/deprecating-a-subgraph.mdx @@ -1,17 +1,17 @@ --- -title: Deprecating a Subgraph +title: إهمال Subgraph --- -So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: +إن كنت ترغب في إهمال الـ subgraph الخاص بك في The Graph Explorer. فأنت في المكان المناسب! اتبع الخطوات أدناه: -1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) -2. Call 'deprecateSubgraph' with your own address as the first parameter -3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. -4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` -5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: +1. قم بزيارة عنوان العقد [ هنا ](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) +2. استدعِ "devecateSubgraph" بعنوانك الخاص كأول بارامتر +3. في حقل "subgraphNumber" ، قم بإدراج 0 إذا كان أول subgraph تنشره ، 1 إذا كان الثاني ، 2 إذا كان الثالث ، إلخ. +4. يمكن العثور على مدخلات # 2 و # 3 في `` الخاص بك والذي يتكون من `{graphAccount}-{subgraphNumber}`. فمثلا، [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID هو `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`,وهو مزيج من `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` و `subgraphNumber` = `<0>` +5. هاهو! لن يظهر الـ subgraph بعد الآن في عمليات البحث في The Graph Explorer. يرجى ملاحظة ما يلي: -- Curators will not be able to signal on the subgraph anymore -- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price -- Deprecated subgraphs will be indicated with an error message. +- لن يتمكن المنسقون من الإشارة على الـ subgraph بعد الآن +- سيتمكن المنشقون الذين قد أشاروا شابقا على الـ subgraph من سحب إشاراتهم بمتوسط سعر السهم +- ستتم تحديد الـ subgraphs المهملة برسالة خطأ. -If you interacted with the now deprecated subgraph, you'll be able to find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab respectively. +إذا تفاعلت مع الـ subgraph المهمل ، فستتمكن من العثور عليه في ملف تعريف المستخدم الخاص بك ضمن علامة التبويب "Subgraphs" أو "Indexing" أو "Curating" على التوالي. 
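Editor's note on the deprecation patch above: it walks through calling `deprecateSubgraph` manually on Etherscan. For completeness, here is a hedged ethers.js sketch of the same call. The two-parameter signature is inferred from the steps in the patch (your Graph account address, then the subgraph number) and is an assumption — confirm the exact ABI on the verified contract before sending a transaction.

```typescript
import { ethers } from 'ethers'

// GNS proxy address taken from the patch above; ABI fragment inferred from its steps.
const GNS_PROXY = '0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825'
const ABI = ['function deprecateSubgraph(address graphAccount, uint256 subgraphNumber)']

async function deprecate(subgraphNumber: number): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider(process.env.ETH_RPC_URL)
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider)
  const gns = new ethers.Contract(GNS_PROXY, ABI, wallet)

  // First parameter is your own address; the second is 0 for your first
  // published subgraph, 1 for the second, and so on.
  const tx = await gns.deprecateSubgraph(wallet.address, subgraphNumber)
  await tx.wait()
}
```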
From 0a53936989f1c92cd6a416908fa11ac764b4290d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 07:57:45 -0500 Subject: [PATCH 348/432] New translations query-the-graph.mdx (Arabic) --- pages/ar/developer/query-the-graph.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/ar/developer/query-the-graph.mdx b/pages/ar/developer/query-the-graph.mdx index 7cb404a200ce..776fbcb6bed1 100644 --- a/pages/ar/developer/query-the-graph.mdx +++ b/pages/ar/developer/query-the-graph.mdx @@ -2,13 +2,13 @@ title: الاستعلام عن The Graph --- -With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +بالـ subgraph المنشور ، قم بزيارة [ Graph Explorer ](https://thegraph.com/explorer) لفتح واجهة [ GraphiQL ](https://github.com/graphql/graphiql) حيث يمكنك استكشاف GraphQL API المنشورة لـ subgraph عن طريق إصدار الاستعلامات وعرض المخطط. -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +تم توفير المثال أدناه ، ولكن يرجى الاطلاع على [Query API](/developer/graphql-api) للحصول على مرجع كامل حول كيفية الاستعلام عن كيانات الـ subgraph. #### مثال -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +يسرد هذا الاستعلام جميع العدادات التي أنشأها الـ mapping الخاص بنا. نظرا لأننا أنشأنا واحدا فقط ، فستحتوي النتيجة فقط على `default-counter`: ```graphql { @@ -19,14 +19,14 @@ This query lists all the counters our mapping has created. Since we only create } ``` -## Using The Graph Explorer +## استخدام The Graph Explorer -Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. +يحتوي كل subgraph منشور على Graph Explorer اللامركزي على عنوان URL فريد للاستعلام والذي يمكنك العثور عليه بالانتقال إلى صفحة تفاصيل الـ subgraph والنقر على "Query" في الزاوية اليمنى العليا. سيؤدي هذا إلى فتح نافذة جانبية والتي تمنحك عنوان URL فريد للاستعلام لـ subgraph بالإضافة إلى بعض الإرشادات حول كيفية الاستعلام عنه. -![Query Subgraph Pane](/img/query-subgraph-pane.png) +![نافذة الاستعلام عن Subgraph](/img/query-subgraph-pane.png) -As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). +كما يمكنك أن تلاحظ ، أنه يجب أن يستخدم عنوان الاستعلام URL مفتاح API فريد. يمكنك إنشاء وإدارة مفاتيح API الخاصة بك في [ Subgraph Studio ](https://thegraph.com/studio) في قسم "API Keys". تعرف على المزيد حول كيفية استخدام Subgraph Studio [ هنا ](/studio/subgraph-studio). -Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). +سيؤدي الاستعلام عن الـ subgraphs باستخدام مفاتيح API إلى إنشاء رسوم الاستعلام التي سيتم دفعها كـ GRT. يمكنك معرفة المزيد حول الفوترة [ هنا ](/studio/billing). 
-You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. +يمكنك أيضا استخدام GraphQL playground في علامة التبويب "Playground" للاستعلام عن subgraph داخل The Graph Explorer. From 02f48c9ba16722753648f8f89d22d07bc7fbd34a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 07:57:46 -0500 Subject: [PATCH 349/432] New translations query-hosted-service.mdx (Arabic) --- pages/ar/hosted-service/query-hosted-service.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ar/hosted-service/query-hosted-service.mdx b/pages/ar/hosted-service/query-hosted-service.mdx index 2eff0c13a84c..fd7de3b535a2 100644 --- a/pages/ar/hosted-service/query-hosted-service.mdx +++ b/pages/ar/hosted-service/query-hosted-service.mdx @@ -4,11 +4,11 @@ title: Query the Hosted Service With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +تم توفير المثال أدناه ، ولكن يرجى الاطلاع على [Query API](/developer/graphql-api) للحصول على مرجع كامل حول كيفية الاستعلام عن كيانات الـ subgraph. #### مثال -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +يسرد هذا الاستعلام جميع العدادات التي أنشأها الـ mapping الخاص بنا. نظرا لأننا أنشأنا واحدا فقط ، فستحتوي النتيجة فقط على `default-counter`: ```graphql { From 9f252fb754d3f48812ce62b1bf556c238a4d5389 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 09:05:20 -0500 Subject: [PATCH 350/432] New translations assemblyscript-api.mdx (Korean) --- pages/ko/developer/assemblyscript-api.mdx | 102 +++++++++++----------- 1 file changed, 51 insertions(+), 51 deletions(-) diff --git a/pages/ko/developer/assemblyscript-api.mdx b/pages/ko/developer/assemblyscript-api.mdx index c1725f1a8942..dbe7e798f68a 100644 --- a/pages/ko/developer/assemblyscript-api.mdx +++ b/pages/ko/developer/assemblyscript-api.mdx @@ -2,108 +2,108 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) +> 참고: 만약 `graph-cli`/`graph-ts` 버전 `0.22.0` 이전의 서브그래프를 생성하는 경우, 이전 버젼의 AssemblyScript를 사용중인 경우, [`Migration Guide`](/developer/assemblyscript-migration-guide)를 참고하시길 권장드립니다. -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +이 페이지는 서브그래프 매핑을 작성할 때 사용할 수 있는 내장 API를 설명합니다. 다음 두 가지 종류의 API를 즉시 사용할 수 있습니다 : -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) 그리고 +- `graph codegen`에 의해 서브그래프 파일들에서 생성된 코드 -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)와 호환되는 한 다른 라이브러리들을 의존성(dependencies)으로서 추가할 수도 있습니다. 이것은 언어 매핑이 작성되기 때문에 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) 위키는 언어 및 표준 라이브러리 기능과 관련한 좋은 소스입니다. -## Installation +## 설치 -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +[`graph init`](/developer/create-subgraph-hosted)로 생성된 서브그래프는 미리 구성된 의존성들을 함께 동반합니다. 이러한 의존성들을 설치하려면 다음 명령 중 하나를 실행해야 합니다. ```sh yarn install # Yarn npm install # NPM ``` -If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: +서브그래프가 처음부터 만들어진 경우 다음 두 명령 중 하나가 의존성으로서 그래프 타입스크립트 라이브러리를 설치할 것입니다. ```sh yarn add --dev @graphprotocol/graph-ts # Yarn npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## API Reference +## API 참조 -The `@graphprotocol/graph-ts` library provides the following APIs: +`@graphprotocol/graph-ts` 라이브러리가 다음과 같은 API들을 제공합니다. -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and the Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. +- 이더리움 스마트 컨트렉트, 이벤트, 블록, 트랜젝션, 그리고 이더리움 벨류들과 작업하기 위한 `ethereum` API +- 더그래프 노드 스토어에서 엔티티를 로드하고 저장하기 위한 `store` API +- 더그래프 노드 출력 및 그래프 탐색기에 메세지를 기록하는 `log` API +- IPFS로부터 파일들을 로드하기 위한 `ipfs` API +- JSON 데이터를 구문 분석하는 `json` API +- 암호화 기능을 사용하기 위한 `crypto` API +- Ethereum, JSON, GraphQL 및 AssemblyScript와 같은 다양한 유형 시스템 간의 변환을 위한 저수준 프리미티브 -### Versions +### 버전 -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +서브그래프 매니페스트의 `apiVersion`은 주어진 서브그래프에 대해 그래프 노드가 실행하는 매핑 API 버전을 지정합니다. 현재 맵핑 API 버전은 0.0.6 입니다. -| Version | Release notes | -|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| 버전 | 릴리스 노트 | +|:-----:| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | 이더리움 트랜잭션 개체에 `nonce` 필드를 추가했습니다.
`baseFeePerGas`가 이더리움 블록 개체에 추가되었습니다. | +| 0.0.5 | AssemblyScript를 버전 0.19.10으로 업그레이드했습니다(변경 내용 깨짐 포함. [`Migration Guide`](/developer/assemblyscript-migration-guide) 참조)
`ethereum.transaction.gasUsed`의 이름이 `ethereum.transaction.gasLimit`로 변경되었습니다. | +| 0.0.4 | Ethereum SmartContractCall 개체에 `functionSignature` 필드를 추가했습니다. | +| 0.0.3 | Ethereum Call 개체에 `from` 필드를 추가했습니다.
`etherem.call.address`의 이름이 `ethereum.call.to`로 변경되었습니다. | +| 0.0.2 | Ethereum Transaction 개체에 `input` 필드를 추가했습니다. | -### Built-in Types +### 기본 제공 유형 -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +AssemblyScript에 내장된 기본 유형에 대한 설명서는 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types)에서 확인할 수 있습니다. -The following additional types are provided by `@graphprotocol/graph-ts`. +다음의 추가적인 유형들이 `@graphprotocol/graph-ts`에 의해 제공됩니다. #### ByteArray ```typescript -import { ByteArray } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'로 부터 { ByteArray }를 입력합니다. ``` -`ByteArray` represents an array of `u8`. +`ByteArray`가 `u8`의 배열을 나타냅니다. _Construction_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromI32(x: i32): ByteArray` - `x`를 바이트로 분해합니다. +- `fromHexString(hex: string): ByteArray` - 입력 길이는 반드시 짝수여야 합니다. `0x` 접두사는 선택사항입니다. -_Type conversions_ +_유형 변환_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. +- `toHexString(): string` - 접두사가 `0x`인 16진 문자열로 변환합니다. +- `toString(): string` - 바이트를 UTF-8 문자열로 해석합니다. +- `toBase58(): string` - 바이트를 base58 문자열로 인코딩합니다. +- `toU32(): u32` - 바이트를 little-endian `u32`로 해석합니다. 오버플로우의 경우에는 Throws 합니다. +- `toI32(): i32` - 바이트 배열을 little-endian `i32`로 해석합니다. 오버플로우의 경우에는 Throws 합니다. -_Operators_ +_연산자_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. +- `equals(y: ByteArray): bool` – `x == y`로 쓸 수 있습니다 #### BigDecimal ```typescript -import { BigDecimal } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'로 부터 { BigDecimal }을 입력합니다. ``` -`BigDecimal` is used to represent arbitrary precision decimals. +`BigDecimal`은 임의의 정밀도 소수를 나타내는 데 사용됩니다. _Construction_ -- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. -- `static fromString(s: string): BigDecimal` – parses from a decimal string. +- `constructor(bigInt: BigInt)` – `BigInt`로 부터 `BigDecimal`을 생성합니다. +- `static fromString(s: string): BigDecimal` – 10진수 문자열에서 구문 분석을 수행합니다. -_Type conversions_ +_유형 변환_ -- `toString(): string` – prints to a decimal string. +- `toString(): string` – 10진수 문자열로 인쇄합니다. _Math_ -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. +- `plus(y: BigDecimal): BigDecimal` – `x + y`로 쓸 수 있습니다. +- `minus(y: BigDecimal): BigDecimal` – `x - y`로 쓸 수 있습니다. +- `times(y: BigDecimal): BigDecimal` – `x * y`로 쓸 수 있습니다. - `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. - `equals(y: BigDecimal): bool` – can be written as `x == y`. - `notEqual(y: BigDecimal): bool` – can be written as `x != y`. @@ -130,7 +130,7 @@ _Construction_ - `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. 
If your input is big-endian, call `.reverse()` first. - _Type conversions_ + _유형 변환_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. - `x.toString(): string` – turns `BigInt` into a decimal number string. From 3a99342ca47c9a1f64c5d37191efcbaeda326fc1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 09:05:24 -0500 Subject: [PATCH 351/432] New translations define-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/define-subgraph-hosted.mdx | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/zh/developer/define-subgraph-hosted.mdx b/pages/zh/developer/define-subgraph-hosted.mdx index 92bf5bd8cd2f..17484f0deb7a 100644 --- a/pages/zh/developer/define-subgraph-hosted.mdx +++ b/pages/zh/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: Define a Subgraph +title: 定义子图 --- -A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. +子图定义了Graph从以太坊索引哪些数据,以及如何存储这些数据。 子图一旦部署,就成为区块链数据全局图的一部分。 -![Define a Subgraph](/img/define-subgraph.png) +![定义子图](/img/define-subgraph.png) -The subgraph definition consists of a few files: +子图定义由几个文件组成: -- `subgraph.yaml`: a YAML file containing the subgraph manifest +- `subgraph.yaml`: 包含子图清单的 YAML 文件 -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +- `schema.graphql`: 一个 GraphQL 模式文件,它定义了为您的子图存储哪些数据,以及如何通过 GraphQL 查询这些数据 -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +- `AssemblyScript映射`: 将事件数据转换为模式中定义的实体(例如本教程中的`mapping.ts`)的 [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) 代码 -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. +在详细了解清单文件的内容之前,您需要安装[Graph CLI](https://github.com/graphprotocol/graph-cli),以构建和部署子图。 -## Install the Graph CLI +## 安装Graph CLI -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +Graph CLI是使用 JavaScript 编写的,您需要安装`yarn`或 `npm`才能使用它;以下教程中假设您已经安装了yarn。 -Once you have `yarn`, install the Graph CLI by running +一旦您安装了`yarn`,可以通过运行以下命令安装 Graph CLI -**Install with yarn:** +**使用yarn安装:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**使用npm安装:** ```bash npm install -g @graphprotocol/graph-cli From e99d493474b5e392574bc2a7a93e5af540551231 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 09:05:26 -0500 Subject: [PATCH 352/432] New translations quick-start.mdx (Chinese Simplified) --- pages/zh/developer/quick-start.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/zh/developer/quick-start.mdx b/pages/zh/developer/quick-start.mdx index 6893d424ddc2..5c07399604fd 100644 --- a/pages/zh/developer/quick-start.mdx +++ b/pages/zh/developer/quick-start.mdx @@ -9,7 +9,7 @@ This guide will quickly take you through how to initialize, create, and deploy y ## Subgraph Studio -### 1. Install the Graph CLI +### 1. 
安装Graph CLI The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. @@ -115,7 +115,7 @@ You can now query your subgraph by following [these instructions](/developer/que ## Hosted Service -### 1. Install the Graph CLI +### 1. 安装Graph CLI "The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. From 0796fcb0989a09630bd59b40551c0321f66220fb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 10:01:18 -0500 Subject: [PATCH 353/432] New translations assemblyscript-api.mdx (Korean) --- pages/ko/developer/assemblyscript-api.mdx | 40 +++++++++++------------ 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/pages/ko/developer/assemblyscript-api.mdx b/pages/ko/developer/assemblyscript-api.mdx index dbe7e798f68a..2f0512062719 100644 --- a/pages/ko/developer/assemblyscript-api.mdx +++ b/pages/ko/developer/assemblyscript-api.mdx @@ -104,30 +104,30 @@ _Math_ - `plus(y: BigDecimal): BigDecimal` – `x + y`로 쓸 수 있습니다. - `minus(y: BigDecimal): BigDecimal` – `x - y`로 쓸 수 있습니다. - `times(y: BigDecimal): BigDecimal` – `x * y`로 쓸 수 있습니다. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. -- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. -- `neg(): BigDecimal` - can be written as `-x`. +- `div(y: BigDecimal): BigDecimal` – `x / y`로 쓸 수 있습니다. +- `equals(y: BigDecimal): bool` – `x == y`로 쓸 수 있습니다. +- `notEqual(y: BigDecimal): bool` – `x != y`로 쓸 수 있습니다. +- `lt(y: BigDecimal): bool` – `x < y`로 쓸 수 있습니다. +- `le(y: BigDecimal): bool` – `x <= y`로 쓸 수 있습니다. +- `gt(y: BigDecimal): bool` – `x > y`로 쓸 수 있습니다. +- `ge(y: BigDecimal): bool` – `x >= y`로 쓸 수 있습니다. +- `neg(): BigDecimal` - `-x`로 쓸 수 있습니다. #### BigInt ```typescript -import { BigInt } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'로 부터 { BigInt }를 입력합니다. ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt`는 큰 정수를 나타내는 데 사용됩니다. 여기에는 `uint32` ~ `uint256` 및 `int64` ~ `int256`값이 포함됩니다. `int32`, `uint24` 혹은 `int8`과 같은 `uint32` 이하는 전부 `i32`로 표시됩니다. -The `BigInt` class has the following API: +`BigInt` 클래스에는 다음의 API가 있습니다: _Construction_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromI32(x: i32): BigInt` – `i32`로 부터 `BigInt`를 생성합니다. +- `BigInt.fromString(s: string): BigInt`– 문자열로부터 `BigInt`를 구문 분석합니다. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – `bytes`를 부호 없는 little-endian 정수로 해석합니다. If your input is big-endian, call `.reverse()` first. - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. 
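As an aside on the `BigInt` factory functions listed in the Korean patch above, the following AssemblyScript sketch shows how they compose with ordinary arithmetic and string conversion; the literal values and the function name are only examples.

```typescript
import { BigInt } from '@graphprotocol/graph-ts'

export function bigIntSketch(): string {
  // The documented factory functions
  let a = BigInt.fromI32(42)
  let b = BigInt.fromString('1000000000000000000') // 1e18, too large for i32

  // `plus` keeps arbitrary precision and can also be written as `a + b`
  let sum = a.plus(b)

  // Back to a decimal string, e.g. for logging or entity fields
  return sum.toString()
}
```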
_유형 변환_ @@ -646,9 +646,9 @@ When the type of a value is certain, it can be converted to a [built-in type](#b - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) +- `value.toArray(): Array` - (이후 `JSONValue`를 상기 5개 방법 중 하나로 변환합니다.) -### Type Conversions Reference +### 유형 변환 참조 | Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | @@ -688,17 +688,17 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Data Source Metadata +### 데이터 소스 메타데이터 -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +`dataSource` 네임스페이스를 통해 핸들러를 호출한 데이터 소스의 계약 주소, 네트워크 및 컨텍스트를 검사할 수 있습니다 - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### Entity and DataSourceContext +### 엔티티 및 Entity and DataSourceContext -The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: +기본 `Entity` 클래스 및 child `DataSourceContext`는 필드를 동적으로 설정하고 필드를 가져오는 도우미가 있습니다. - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From e32cf7bd1e475916c18f61a7f5298e17ba4ed503 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 11:17:11 -0500 Subject: [PATCH 354/432] New translations what-is-hosted-service.mdx (Spanish) --- .../hosted-service/what-is-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/es/hosted-service/what-is-hosted-service.mdx b/pages/es/hosted-service/what-is-hosted-service.mdx index 6e7ff075b31f..15894df53eda 100644 --- a/pages/es/hosted-service/what-is-hosted-service.mdx +++ b/pages/es/hosted-service/what-is-hosted-service.mdx @@ -1,20 +1,20 @@ --- -title: What is the Hosted Service? +title: '¿Qué es el Servicio Alojado?' --- -We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +Esta sección te guiará a través del despliegue de un subgrafo en el Servicio Alojado, también conocido como [Servicio Alojado.](https://thegraph.com/hosted-service/) Como recordatorio, el Servicio Alojado no se cerrará pronto. El Servicio Alojado desaparecerá gradualmente cuando alcancemos la paridad de características con la red descentralizada. Tus subgrafos desplegados en el Servicio Alojado siguen disponibles [aquí.](https://thegraph.com/hosted-service/) -If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. +Si no tienes una cuenta en el Servicio Alojado, puedes registrarte con tu cuenta de Github. 
Una vez que te autentiques, puedes empezar a crear subgrafos a través de la interfaz de usuario y desplegarlos desde tu terminal. Graph Node admite varias redes de prueba de Ethereum (Rinkeby, Ropsten, Kovan) además de la red principal. ## Crear un Subgrafo -First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` +Primero sigue las instrucciones [aquí](/developer/define-subgraph-hosted) para instalar the Graph CLI. Crea un subgrafo pasando `graph init --product hosted service` -### From an Existing Contract +### De un Contrato Existente -If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from this contract can be a good way to get started on the Hosted Service. +Si ya tienes un contrato inteligente desplegado en la red principal de Ethereum o en una de las redes de prueba, el arranque de un nuevo subgrafo a partir de este contrato puede ser una buena manera de empezar a utilizar el Servicio Alojado. -You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). +Puedes utilizar este comando para crear un subgrafo que indexe todos los eventos de un contrato existente. Esto intentará obtener el contrato ABI de [Etherscan](https://etherscan.io/). ```sh graph init \ @@ -23,14 +23,14 @@ graph init \ / [] ``` -Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form. +Además, puedes utilizar los siguientes argumentos opcionales. Si la ABI no puede ser obtenida de Etherscan, vuelve a solicitar una ruta de archivo local. Si falta algún argumento opcional en el comando, éste te lleva a través de un formulario interactivo. ```sh --network \ --abi \ ``` -The `` in this case is your github user or organization name, `` is the name for your subgraph, and `` is the optional name of the directory where graph init will put the example subgraph manifest. The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. **Both --network and --abi are optional.** +El ``en este caso es tu nombre de usuario u organización de github, `` es el nombre para tu subgrafo, y `` es el nombre opcional del directorio donde graph init pondrá el manifiesto del subgrafo de ejemplo. El `` es la dirección de tu contrato existente. `` es el nombre de la red Ethereum en la que está activo el contrato. `` es una ruta local a un archivo ABI del contrato. 
**Tanto --network como --abi son opcionales** ### From an Example Subgraph From 9e25eb78124771f43612aa043593dd77a0aa9a7f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:09:34 -0500 Subject: [PATCH 355/432] New translations create-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/create-subgraph-hosted.mdx | 196 +++++++++++------- 1 file changed, 121 insertions(+), 75 deletions(-) diff --git a/pages/zh/developer/create-subgraph-hosted.mdx b/pages/zh/developer/create-subgraph-hosted.mdx index 6b235e379634..86b0d3df18d8 100644 --- a/pages/zh/developer/create-subgraph-hosted.mdx +++ b/pages/zh/developer/create-subgraph-hosted.mdx @@ -2,9 +2,9 @@ title: Create a Subgraph --- -Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. +Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. -The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. +The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. ## Supported Networks @@ -44,7 +44,7 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `aurora` - `aurora-testnet` -The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. 
+The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). @@ -65,17 +65,17 @@ The `` is the ID of your subgraph in Subgraph Studio, it can be f ## From An Example Subgraph -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: The following command does this: ``` graph init --studio ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. ## The Subgraph Manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). For the example subgraph, `subgraph.yaml` is: @@ -120,17 +120,17 @@ dataSources: The important entries to update for the manifest are: -- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. 
+- `description`: a human-readable description of what the subgraph is. `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. This is also displayed by the Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. In most cases we suggest using the block in which the contract was created. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. The schema for each entity is defined in the the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. @@ -138,9 +138,9 @@ The important entries to update for the manifest are: - `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. 
Add an entry for each contract from which data needs to be indexed to the `dataSources` array.

The triggers for a data source within a block are ordered using the following process:

1. Event and call triggers are first ordered by transaction index within the block.
2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest.
3. Block triggers are run after event and call triggers, in the order they are defined in the manifest.

These ordering rules are subject to change.

### Getting The ABIs

The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files:

- If you are building your own project, you will likely have access to your most current ABIs.
- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile.
- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.

## The GraphQL Schema

The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section.

## Defining Entities

Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions.

With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type.
Each type that should be an entity is required to be annotated with an `@entity` directive. Each type that should be an entity is required to be annotated with an `@entity` directive. ### Good Example @@ -184,7 +184,7 @@ type Gravatar @entity { ### Bad Example -The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. +The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. It is not recommended to map events or function calls to entities 1:1. ```graphql type GravatarAccepted @entity { @@ -199,18 +199,29 @@ type GravatarDeclined @entity { owner: Bytes displayName: String imageUrl: String +} + type Gravatar @entity { + id: ID! + owner: Bytes + displayName: String + imageUrl: String + accepted: Boolean +} + owner: Bytes + displayName: String + imageUrl: String } ``` ### Optional and Required Fields -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. +Each entity must have an `id` field, which is of type `ID!` (string). Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. ### Built-In Scalar Types @@ -218,19 +229,19 @@ Each entity must have an `id` field, which is of type `ID!` (string). The `id` f We support the following scalars in our GraphQL API: -| Type | Description | -| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. 
| +| Type | Description | +| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums -You can also create enums within a schema. Enums have the following syntax: +You can also create enums within a schema. Enums have the following syntax: Enums have the following syntax: ```graphql enum TokenStatus { @@ -240,13 +251,13 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). #### Entity Relationships -An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. +An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. 
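Stepping back to the enum example above (the `Token` entity with a `tokenStatus` field): in a mapping, an enum field is written using the string representation of the enum value. A minimal sketch, assuming the generated `Token` class from that schema; the function name is hypothetical.

```typescript
import { Token } from '../generated/schema'

export function markSecondOwner(tokenId: string): void {
  // Load the entity if it exists, otherwise create it on demand
  let token = Token.load(tokenId)
  if (token == null) {
    token = new Token(tokenId)
  }
  // Enum fields are set with the string representation of the enum value
  token.tokenStatus = 'SecondOwner'
  token.save()
}
```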
It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. Relationships are defined on entities just like any other field except that the type specified is that of another entity. @@ -256,6 +267,8 @@ Define a `Transaction` entity type with an optional one-to-one relationship with ```graphql type Transaction @entity { + id: ID! + type Transaction @entity { id: ID! transactionReceipt: TransactionReceipt } @@ -263,6 +276,8 @@ type Transaction @entity { type TransactionReceipt @entity { id: ID! transaction: Transaction +} + transaction: Transaction } ``` @@ -271,6 +286,8 @@ type TransactionReceipt @entity { Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: ```graphql +type Token @entity { + id: ID! type Token @entity { id: ID! } @@ -279,6 +296,9 @@ type TokenBalance @entity { id: ID! amount: Int! token: Token! +} + amount: Int! + token: Token! } ``` @@ -286,7 +306,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -294,6 +314,8 @@ We can make the balances for a token accessible from the token by deriving a `to ```graphql type Token @entity { + id: ID! + tokenBalances: [TokenBalance!]! type Token @entity { id: ID! tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") } @@ -302,16 +324,19 @@ type TokenBalance @entity { id: ID! amount: Int! token: Token! +} + amount: Int! + token: Token! } ``` #### Many-To-Many Relationships -For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. 
+For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. #### Example -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. ```graphql type Organization @entity { @@ -339,11 +364,17 @@ type Organization @entity { type User @entity { id: ID! name: String! - organizations: [UserOrganization!] @derivedFrom(field: "organization") + organizations: [UserOrganization!] type Organization @entity { + id: ID! + name: String! + members: [User!]! } -type UserOrganization @entity { - id: ID! # Set to `${user.id}-${organization.id}` +type User @entity { + id: ID! + name: String! + organizations: [Organization!]! @derivedFrom(field: "members") +} # Set to `${user.id}-${organization.id}` user: User! organization: Organization! } @@ -368,21 +399,23 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: This is illustrated in the example below: ```graphql type MyFirstEntity @entity { "unique identifier and primary key of the entity" id: ID! address: Bytes! +} + address: Bytes! } ``` ## Defining Fulltext Search Fields -Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. +Fulltext search queries filter and rank entities based on a text search input. Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. -A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. 
+A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. @@ -404,10 +437,18 @@ type Band @entity { labels: [Label!]! discography: [Album!]! members: [Musician!]! +} + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. ```graphql query { @@ -424,7 +465,7 @@ query { ### Languages supported -Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". Supported language dictionaries: @@ -458,9 +499,9 @@ Supported algorithms for ordering results: ## Writing Mappings -The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. 
The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. -For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: @@ -489,19 +530,19 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. ### Recommended IDs for Creating New Entities -Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. +Every entity has to have an `id` that is unique among all entities of the same type. Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. 
Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. - `event.params.id.toHex()` - `event.transaction.from.toHex()` - `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` -We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. +We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. ## Code Generation @@ -523,7 +564,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with ```javascript import { @@ -535,23 +576,23 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
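Tying together the recommended `id` patterns and the generated entity classes described above, here is a short sketch using the example subgraph's `Gravatar` entity and `NewGravatar` event; the handler name and the choice of ID pattern are illustrative, not prescribed by the patch.

```typescript
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatarWithCompositeId(event: NewGravatar): void {
  // One of the recommended `id` patterns: transaction hash + log index
  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()

  // The generated class gives typed field access and a save() method
  let gravatar = new Gravatar(id)
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```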
All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. It must also be performed at least once before building or deploying the subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. ## Data Source Templates -A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. ### Data Source for the Main Contract -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. +First, you define a regular data source for the main contract. First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. 
```yaml
dataSources:
@@ -578,9 +619,13 @@ dataSources:

### Data Source Templates for Dynamically Created Contracts

-Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract.
+Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract.

```yaml
dataSources:
  - kind: ethereum/contract
    name: Factory
    # ... other source fields for the main contract ...
@@ -614,7 +659,7 @@ templates:

### Instantiating a Data Source Template

-In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract.
+In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract.

```typescript
import { Exchange } from '../generated/templates'

export function handleNewExchange(event: NewExchange): void {
  // Start indexing the exchange; `event.params.exchange` is the
  // address of the new exchange contract
  Exchange.create(event.params.exchange)
}
```

### Data Source Context

-Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so:
+Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so:

```typescript
import { Exchange } from '../generated/templates'

export function handleNewExchange(event: NewExchange): void {
  let context = new DataSourceContext()
  context.setString('tradingPair', event.params.tradingPair)
  Exchange.createWithContext(event.params.exchange, context)
}
```

There are setters and getters like `setString` and `getString` for all value types.

## Start Blocks

-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.

```yaml
dataSources:
@@ -695,7 +740,7 @@ While events provide an effective way to collect relevant changes to the state o

Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.

-> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it.
+> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it.

### Defining a Call Handler

@@ -724,11 +769,11 @@ dataSources:
      handler: handleCreateGravatar
```

-The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract.
+The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract.

### Mapping Function

-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:

```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -743,11 +788,11 @@ export function handleCreateGravatar(call: CreateGravatarCall): void {
}
```

-The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.
+The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.

## Block Handlers

-In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter.
+In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter.

### Supported Filters

```yaml
filter:
@@ -758,7 +803,7 @@ filter:

_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
+The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.

```yaml
dataSources:
@@ -787,7 +832,7 @@ dataSources:

### Mapping Function

-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.

```typescript
import { ethereum } from '@graphprotocol/graph-ts'
@@ -810,7 +855,7 @@ eventHandlers:
    handler: handleGive
```

-An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature.
+An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature.

## Experimental features

@@ -840,7 +885,7 @@ Note that using a feature without declaring it will incur in a **validation erro

A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts.

-Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/).
+Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/).
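
A minimal sketch of the `ipfs.cat` flow described above, added here for illustration and not part of the original patch — the `ipfsHash` value, the JSON layout and the helper function name are assumptions:

```typescript
import { ipfs, json, Bytes, log } from '@graphprotocol/graph-ts'

// Reads a pinned file from IPFS and logs a field from its JSON contents.
// `ipfs.cat` returns null when the file is not available on the connected node.
export function readPinnedFile(ipfsHash: string): void {
  let data = ipfs.cat(ipfsHash)
  if (data == null) {
    log.warning('File {} not found on the IPFS node', [ipfsHash])
    return
  }
  // Assumes the pinned file contains a JSON object such as {"name": "..."}
  let obj = json.fromBytes(data as Bytes).toObject()
  let name = obj.get('name')
  if (name != null) {
    log.info('name in {}: {}', [ipfsHash, name.toString()])
  }
}
```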

> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio.

@@ -850,7 +895,7 @@

In order to make this easy for subgraph developers, The Graph team wrote a tool for transferring files from one IPFS node to another.

### Non-fatal errors

-Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic.
+Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic.

> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.

@@ -864,7 +909,7 @@ features:
    ...
```

-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:

```graphql
foos(first: 100, subgraphError: allow) {
  id
  value
}

_meta {
  hasIndexingErrors
}
```

@@ -898,24 +943,25 @@ If the subgraph encounters an error that query will return both the data and a g

### Grafting onto Existing Subgraphs

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_.
Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed.
+When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed.

-> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio.

A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel:

```yaml
description: ...
graft:
  base: Qm... # Subgraph ID of base subgraph
  block: 7345624 # Block number
```

-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.

-Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
+Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.

-The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways:
+The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways:

- It adds or removes entity types
- It removes attributes from entity types

From 7d79cc64d494fb4a9a21d51f0e11ac3e0faf2b3b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:09:41 -0500
Subject: [PATCH 356/432] New translations define-subgraph-hosted.mdx (Chinese
 Simplified)

---
 pages/zh/developer/define-subgraph-hosted.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/zh/developer/define-subgraph-hosted.mdx b/pages/zh/developer/define-subgraph-hosted.mdx
index 17484f0deb7a..83bde5bef5de 100644
--- a/pages/zh/developer/define-subgraph-hosted.mdx
+++ b/pages/zh/developer/define-subgraph-hosted.mdx
@@ -2,7 +2,7 @@ title: 定义子图
---

-子图定义了Graph从以太坊索引哪些数据,以及如何存储这些数据。 子图一旦部署,就成为区块链数据全局图的一部分。
+A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data.

![定义子图](/img/define-subgraph.png)

From 5c813eb59b3c512d17329dbce0c71ce9eff90a19 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:09:46 -0500
Subject: [PATCH 357/432] New translations deprecating-a-subgraph.mdx (Chinese
 Simplified)

---
 pages/zh/developer/deprecating-a-subgraph.mdx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pages/zh/developer/deprecating-a-subgraph.mdx b/pages/zh/developer/deprecating-a-subgraph.mdx
index f8966e025c13..726461b4c46c 100644
--- a/pages/zh/developer/deprecating-a-subgraph.mdx
+++ b/pages/zh/developer/deprecating-a-subgraph.mdx
@@ -2,13 +2,13 @@ title: Deprecating a Subgraph
---

-So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below:
+So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below:

1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract)
2. Call 'deprecateSubgraph' with your own address as the first parameter
3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc.
-4. Inputs for #2 and #3 can be found in your `<subgraph ID>` which is composed of the `{graphAccount}-{subgraphNumber}`.
For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>`
+4. Inputs for #2 and #3 can be found in your `<subgraph ID>` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>`
-5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following:
+5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following:

- Curators will not be able to signal on the subgraph anymore
- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price

From e2029c5756200a7ba06f275f9dca3b061c79b9fa Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:09:51 -0500
Subject: [PATCH 358/432] New translations developer-faq.mdx (Chinese
 Simplified)

---
 pages/zh/developer/developer-faq.mdx | 88 ++++++++++++++--------------
 1 file changed, 44 insertions(+), 44 deletions(-)

diff --git a/pages/zh/developer/developer-faq.mdx b/pages/zh/developer/developer-faq.mdx
index 41449c60e5ab..58380c271633 100644
--- a/pages/zh/developer/developer-faq.mdx
+++ b/pages/zh/developer/developer-faq.mdx
@@ -2,35 +2,35 @@ title: Developer FAQs
---

### 1. Can I delete my subgraph?

It is not possible to delete subgraphs once they are created.

### 2. Can I change my subgraph name?

No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps.

### 3. Can I change the GitHub account associated with my subgraph?

No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph.

### 4. Am I still able to create a subgraph if my smart contracts don't have events?

-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying.
Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data.

-If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower.
+If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower.

### 5. Is it possible to deploy one subgraph with the same name for multiple networks?

-You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph)
+You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph)

### 6. How are templates different from data sources?

-Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address.

Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates).

### 7. How do I make sure I'm using the latest version of graph-node for my local deployments?

You can run the following command:

```sh
docker pull graphprotocol/graph-node:latest
```

**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node.

### 8. How do I call a contract function or access a public state variable from my subgraph mappings?

Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api).

### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`?

Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually.

### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories?

- [graph-node](https://github.com/graphprotocol/graph-node)
- [graph-cli](https://github.com/graphprotocol/graph-cli)
- [graph-ts](https://github.com/graphprotocol/graph-ts)

### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events?

If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.

### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events?

Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.

### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers?

Yes.
You can do this by importing `graph-ts` as per the example below:

```javascript
import { dataSource } from '@graphprotocol/graph-ts'

dataSource.network()
dataSource.address()
```

### 14. Do you support block and call handlers on Rinkeby?

-On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being.
+On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being.

### 15. Can I import ethers.js or other JS libraries into my subgraph mappings?

-Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.
+Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.

### 16. Is it possible to specify what block to start indexing on?

-Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks
+Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks

### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync.

Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks)

### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed?

Yes! Try the following command, substituting "organization/subgraphName" with the organization it is published under and the name of your subgraph:

```sh
curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql
```

### 19. What networks are supported by The Graph?

The graph-node supports any EVM-compatible JSON RPC API chain.

@@ -135,38 +135,38 @@

In the Hosted Service, the following networks are supported:

There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files).

### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?

You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.

### 21. Is it possible to use Apollo Federation on top of graph-node?

-Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service.
+Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service.

### 22. Is there a limit to how many objects The Graph can return per query?

-By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with:
+By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with:

```graphql
someCollection(first: 1000, skip: <number>) { ... }
```

### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?

Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.

### 24. Where do I go to find my current subgraph on the Hosted Service?

-Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service)
+Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service)

### 25. Will the Hosted Service start charging query fees?

-The Graph will never charge for the Hosted Service.
The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable.
+The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable.

### 26. When will the Hosted Service be shut down?

If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service.

### 27. How do I upgrade a subgraph on mainnet?

-If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.

From df0a4ddce2e67fd6da299f36fc470c423e44b4e2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:10:01 -0500
Subject: [PATCH 359/432] New translations introduction.mdx (Chinese
 Simplified)

---
 pages/zh/about/introduction.mdx | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/pages/zh/about/introduction.mdx b/pages/zh/about/introduction.mdx
index 70d4142f38ec..be87ededcc82 100644
--- a/pages/zh/about/introduction.mdx
+++ b/pages/zh/about/introduction.mdx
@@ -6,21 +6,21 @@ title: 介绍

## 什么是The Graph

-The Graph是一个去中心化的协议,用于索引和查询区块链的数据,首先是从以太坊开始的。 它使查询那些难以直接查询的数据成为可能。
+The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly.

像 [Uniswap](https://uniswap.org/)这样具有复杂智能合约的项目,以及像 [Bored Ape Yacht Club](https://boredapeyachtclub.com/) 这样的NFTs倡议,都在以太坊区块链上存储数据,因此,除了直接从区块链上读取基本数据外,真的很难。

-在Bored Ape Yacht Club的案例中,我们可以对 [合约](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code)进行基本的读取操作,比如获得某个Ape的所有者,根据他们的ID获得某个Ape的内容URI,或者总供应量,因为这些读取操作是直接编入智能合约的,但是更高级的现实世界的查询和操作,比如聚合、搜索、关系和非粗略的过滤是不可能的。 例如,如果我们想查询某个地址所拥有的apes,并通过它的某个特征进行过滤,我们将无法通过直接与合约本身进行交互来获得该信息。
+In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself.

-为了获得这些数据,你必须处理曾经发出的每一个 [`传输`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) 事件,使用Token ID和IPFS的哈希值从IPFS读取元数据,然后将其汇总。 即使是这些类型的相对简单的问题,在浏览器中运行的去中心化应用程序(dapp)也需要**几个小时甚至几天** 才能得到答案。
+To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer.

-你也可以建立你自己的服务器,在那里处理交易,把它们保存到数据库,并在上面建立一个API终端,以便查询数据。 然而,这种选择是资源密集型的,需要维护,会出现单点故障,并破坏了去中心化化所需的重要安全属性。
+You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.

**为区块链数据编制索引真的非常非常难。** 区块链的属性,如最终性、链重组或未封闭的区块,使这一过程进一步复杂化,并使从区块链数据中检索出正确的查询结果不仅耗时,而且在概念上也很难。

-The Graph通过一个去中心化的协议解决了这一问题,该协议可以对区块链数据进行索引并实现高性能和高效率的查询。 这些API(索引的 "子图")然后可以用标准的GraphQL API进行查询。 今天,有一个托管服务,也有一个具有相同功能的分去中心化协议。 两者都由 [Graph Node](https://github.com/graphprotocol/graph-node) 的开放源码实现支持。
+The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node).

@@ -28,7 +28,7 @@ The Graph通过一个去中心化的协议解决了这一问题,该协议可

## The Graph是如何工作的

-Graph根据子图描述(称为子图清单)来学习什么以及如何为以太坊数据建立索引。 子图描述定义了子图所关注的智能合约,这些合约中需要关注的事件,以及如何将事件数据映射到The Graph将存储在其数据库中的数据。
+The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database.

一旦你写好了 `子图清单 `,你就可以使用Graph CLI将该定义存储在IPFS中,并告诉索引人开始为该子图编制索引数据。

@@ -41,8 +41,8 @@

1. 一个去中心化的应用程序通过智能合约上的交易向以太坊添加数据。
2. 智能合约在处理交易时,会发出一个或多个事件。
3. Graph节点不断扫描以太坊的新区块和它们可能包含的子图的数据。
4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats.
应用程序使用GraphQL查询称为子图的开放API,以检索网络上的索引数据。 通过The Graph,开发者可以建立完全在公共基础设施上运行的无服务器应用程序。 > Grt合约地址:[0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) ## 概述 -The Graph网络由索引人、策展人和委托人组成,为网络提供服务,并为Web3应用程序提供数据。 消费者使用应用程序并消费数据。 +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. 消费者使用应用程序并消费数据。 ![代币经济学](/img/Network-roles@2x.png) -为了确保The Graph 网络的经济安全和被查询数据的完整性,参与者将Graph 令牌(GRT)质押并使用。 GRT是一种工作代币,是以太坊区块链上的ERC-20,用于分配网络中的资源。 活跃的索引人、策展人和委托人可以提供服务,并从网络中获得收入,与他们的工作量和他们的GRT委托量成正比。 +为了确保The Graph 网络的经济安全和被查询数据的完整性,参与者将Graph 令牌(GRT)质押并使用。 To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. 活跃的索引人、策展人和委托人可以提供服务,并从网络中获得收入,与他们的工作量和他们的GRT委托量成正比。 From 4d77da48ee33a4084a2a463611f55afc935a9fef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:10:18 -0500 Subject: [PATCH 361/432] New translations assemblyscript-api.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-api.mdx | 90 +++++++++++++---------- 1 file changed, 52 insertions(+), 38 deletions(-) diff --git a/pages/zh/developer/assemblyscript-api.mdx b/pages/zh/developer/assemblyscript-api.mdx index b5066fab02f2..8545d8d60c8f 100644 --- a/pages/zh/developer/assemblyscript-api.mdx +++ b/pages/zh/developer/assemblyscript-api.mdx @@ -4,16 +4,16 @@ title: AssemblyScript API > Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: Two kinds of APIs are available out of the box: - the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and - code generated from subgraph files by `graph codegen`. -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. 

## Installation

Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands:

```sh
yarn install # Yarn
npm install # NPM
```

## API Reference

The `@graphprotocol/graph-ts` library provides the following APIs:

### Versions

-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6.
+The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6.

| Version | Release notes |
|:-------:| ------------- |

### Built-in Types

#### ByteArray

```typescript
import { ByteArray } from '@graphprotocol/graph-ts'
```

_Construction_

- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes.
- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional.

_Type conversions_

- `toHexString(): string` - Converts to a hex string prefixed with `0x`.
- `toString(): string` - Interprets the bytes as a UTF-8 string.
- `toBase58(): string` - Encodes the bytes into a base58 string.
- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow.
- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow.

_Operators_

#### BigDecimal

```typescript
import { BigDecimal } from '@graphprotocol/graph-ts'
```

_Math_

#### BigInt

```typescript
import { BigInt } from '@graphprotocol/graph-ts'
```

`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`.

The `BigInt` class has the following API:

_Construction_

- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`.
- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string.
- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first.
- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first.

_Type conversions_

- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters.
- `x.toString(): string` – turns `BigInt` into a decimal number string.
- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if the value does not fit into `i32`. It's a good idea to first check `x.isI32()`.
- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part.

_Math_

#### TypedMap

```typescript
import { TypedMap } from '@graphprotocol/graph-ts'
```

`TypedMap` can be used to store key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51).

The `TypedMap` class has the following API:

#### Bytes

```typescript
import { Bytes } from '@graphprotocol/graph-ts'
```

`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc.

The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and supports all the `Uint8Array` functionality, plus the following new methods:

#### Address

### Store API

```typescript
import { store } from '@graphprotocol/graph-ts'
```

The `store` API allows loading, saving and removing entities from and to the Graph Node store.

-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema.
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.

#### Creating entities

The following is a common pattern for creating entities from Ethereum events.

```typescript
import { Transfer as TransferEvent } from '../generated/Contract/Contract'
import { Transfer } from '../generated/schema'

export function handleTransfer(event: TransferEvent): void {
  // Create a Transfer entity, using the transaction hash as the entity ID
  let id = event.transaction.hash.toHex()
  let transfer = new Transfer(id)

  // Set properties on the entity, using the event parameters
  transfer.from = event.params.from
  transfer.to = event.params.to
  transfer.amount = event.params.amount

  // Save the entity to the store
  transfer.save()
}
```

-When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters.
+When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters.

-Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID.
+Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID.

#### Loading entities from the store

If an entity already exists, it can be loaded from the store with the following:

```typescript
let id = event.transaction.hash.toHex() // or however the ID is constructed
let transfer = Transfer.load(id)
if (transfer == null) {
  transfer = new Transfer(id)
}

// Use the Transfer entity as before
```

-As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value.
+As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value.

-> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities.
+> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities.

#### Updating existing entities

There are two ways to update an existing entity:

1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store.
2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it.

Changing properties is straightforward in most cases, thanks to the generated property setters:

```typescript
let transfer = new Transfer(id)
transfer.from = ...
transfer.to = ...
transfer.amount = ...
```

It is also possible to unset properties with one of the following two instructions:

```typescript
transfer.from.unset()
transfer.from = null
```

This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`.

Updating array properties is a little more involved, as getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field.

```typescript
// This won't work
entity.numbers.push(BigInt.fromI32(1))
entity.save()

// This will work
let numbers = entity.numbers
numbers.push(BigInt.fromI32(1))
entity.numbers = numbers
entity.save()
```

#### Removing entities from the store

There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`:

```typescript
import { store } from '@graphprotocol/graph-ts'
...
let id = event.transaction.hash.toHex()
store.remove('Transfer', id)
```

### Ethereum API

The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding of Ethereum data.

#### Support for Ethereum Types

-As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.

With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them.

-The following example illustrates this. Given a subgraph schema like
+The following example illustrates this. Given a subgraph schema like

```graphql
type Transfer @entity {
  from: Bytes!
to: Bytes! amount: BigInt! +} + to: Bytes! + amount: BigInt! } ``` @@ -346,7 +353,7 @@ transfer.save() #### Events and Block/Transaction Data -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -392,9 +399,9 @@ class Transaction { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +A common pattern is to access the contract from which an event originates. This is achieved with the following code: This is achieved with the following code: ```typescript // Import the generated contract class @@ -411,13 +418,13 @@ export function handleTransfer(event: Transfer) { } ``` -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. For public state variables a method with the same name is created automatically. Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -447,6 +454,8 @@ let tuple = tupleArray as ethereum.Tuple let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! 
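// `encoded` holds the ABI-encoded (address,uint256) tuple as Bytes; ethereum.encode
// returns `Bytes | null`, which is why the line above uses the non-null assertion `!`.
// The ethereum.decode call below likewise returns `ethereum.Value | null`, so real
// mappings may want to check the result for null before using it.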
+let decoded = ethereum.decode('(address,uint256)', encoded) + let decoded = ethereum.decode('(address,uint256)', encoded) ``` @@ -462,7 +471,7 @@ For more information: import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -472,7 +481,7 @@ The `log` API includes the following functions: - `log.error(fmt: string, args: Array): void` - logs an error message. - `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) @@ -508,7 +517,7 @@ export function handleSomeEvent(event: SomeEvent): void { #### Logging multiple entries from an existing array -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. ```typescript let myArray = ['A', 'B', 'C'] @@ -543,6 +552,9 @@ export function handleSomeEvent(event: SomeEvent): void { event.block.hash.toHexString(), // "0x..." event.transaction.hash.toHexString(), // "0x..." ]) +} + event.transaction.hash.toHexString(), // "0x..." + ]) } ``` @@ -552,7 +564,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Smart contracts occasionally anchor IPFS files on chain. 
This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. Given an IPFS hash or path, reading a file from IPFS is done as follows: @@ -569,7 +581,7 @@ let data = ipfs.cat(path) **Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,9 +611,9 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. 
Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. ### Crypto API @@ -609,7 +621,7 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: +The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: Right now, there is only one: - `crypto.keccak256(input: ByteArray): ByteArray` @@ -626,13 +638,15 @@ JSON data can be parsed using the `json` API: - `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` - `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: ```typescript let value = json.fromBytes(...) +let value = json.fromBytes(...) if (value.kind == JSONValueKind.BOOL) { ... } +} ``` In addition, there is a method to check if the value is `null`: From c4cc4ac5e5d58e0a5b9a5b01c89bee792f410274 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:10:24 -0500 Subject: [PATCH 362/432] New translations assemblyscript-migration-guide.mdx (Chinese Simplified) --- .../assemblyscript-migration-guide.mdx | 59 +++++++++++++++---- 1 file changed, 49 insertions(+), 10 deletions(-) diff --git a/pages/zh/developer/assemblyscript-migration-guide.mdx b/pages/zh/developer/assemblyscript-migration-guide.mdx index 2db90a608110..592fcdee6d94 100644 --- a/pages/zh/developer/assemblyscript-migration-guide.mdx +++ b/pages/zh/developer/assemblyscript-migration-guide.mdx @@ -2,11 +2,11 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 That will enable subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 > Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. @@ -48,6 +48,11 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ```yaml ... +dataSources: + ... + mapping: + ... + ... dataSources: ... mapping: @@ -101,12 +106,12 @@ if (maybeValue) { Or force it like this: ```typescript -let maybeValue = load()! // breaks in runtime if value is null +let maybeValue = load()! let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. ### Variable Shadowing @@ -135,6 +140,9 @@ By doing the upgrade on your subgraph, sometimes you might get errors like these ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. if (decimals == null) { ~~~~ + in src/mappings/file.ts(41,21) + if (decimals == null) { + ~~~~ in src/mappings/file.ts(41,21) ``` To solve you can simply change the `if` statement to something like this: @@ -253,6 +261,16 @@ let somethingOrElse = something ? something : 'else' let somethingOrElse +if (something) { + somethingOrElse = something +} else { + somethingOrElse = 'else' +} something : 'else' + +// or + +let somethingOrElse + if (something) { somethingOrElse = something } else { @@ -270,7 +288,7 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile container.data : 'else' // doesn't compile ``` Which outputs this error: @@ -280,6 +298,9 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + let somethingOrElse: string = container.data ? container.data : "else"; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: @@ -293,7 +314,7 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? 
data : 'else' // compiles just fine :) data : 'else' // compiles just fine :) ``` ### Operator overloading with property access @@ -302,6 +323,10 @@ If you try to sum (for example) a nullable type (from a property access) with a ```typescript class BigInt extends Uint8Array { + @operator('+') + plus(other: BigInt): BigInt { + // ... + class BigInt extends Uint8Array { @operator('+') plus(other: BigInt): BigInt { // ... @@ -373,7 +398,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: So you either initialize it first: ```typescript let total = Total.load('latest') @@ -392,6 +417,8 @@ Or you can just change your GraphQL schema to not use a nullable type for this p type Total @entity { id: ID! amount: BigInt! +} + amount: BigInt! } ``` @@ -445,13 +472,19 @@ export class Something { This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: If you have a schema like this: ```graphql type Something @entity { id: ID! } +type MyEntity @entity { + id: ID! + invalidField: [Something]! # no longer valid +} +} + type MyEntity @entity { id: ID! invalidField: [Something]! # no longer valid @@ -465,6 +498,12 @@ type Something @entity { id: ID! } +type MyEntity @entity { + id: ID! + invalidField: [Something]! # no longer valid +} +} + type MyEntity @entity { id: ID! invalidField: [Something!]! # valid @@ -478,7 +517,7 @@ This changed because of nullability differences between AssemblyScript versions, - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) - Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) - Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. 
Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) - Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From da6d15057c2420000f5c6d2cf3f3b3b60b11b2c5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:10:36 -0500 Subject: [PATCH 363/432] New translations quick-start.mdx (Chinese Simplified) --- pages/zh/developer/quick-start.mdx | 46 ++++++++++++++++-------------- 1 file changed, 24 insertions(+), 22 deletions(-) diff --git a/pages/zh/developer/quick-start.mdx b/pages/zh/developer/quick-start.mdx index 5c07399604fd..e8d4315103fe 100644 --- a/pages/zh/developer/quick-start.mdx +++ b/pages/zh/developer/quick-start.mdx @@ -9,7 +9,7 @@ This guide will quickly take you through how to initialize, create, and deploy y ## Subgraph Studio -### 1. 安装Graph CLI +### 1. 1. Install the Graph CLI The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. @@ -21,7 +21,7 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. 2. Initialize your Subgraph - Initialize your subgraph from an existing contract. @@ -29,13 +29,13 @@ $ yarn global add @graphprotocol/graph-cli graph init --studio ``` -- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. +- Your subgraph slug is an identifier for your subgraph. Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. ![Subgraph command](/img/Subgraph-Slug.png) -### 3. Write your Subgraph +### 3. 3. Write your Subgraph -The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. - Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. 
@@ -43,7 +43,7 @@ The previous commands creates a scaffold subgraph that you can use as a starting For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Deploy to the Subgraph Studio +### 4. 4. Deploy to the Subgraph Studio - Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. - Click "Create" and enter the subgraph slug you used in step 2. @@ -54,7 +54,7 @@ $ graph codegen $ graph build ``` -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +- Authenticate and deploy your subgraph. Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. ```sh $ graph auth --studio @@ -63,12 +63,13 @@ $ graph deploy --studio - You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` -### 5. Check your logs +### 5. 5. Check your logs -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +The logs should tell you if there are any errors. The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { + indexingStatuses(subgraphs: ["Qm..."]) { indexingStatuses(subgraphs: ["Qm..."]) { node synced @@ -109,13 +110,13 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. 6. Query your Subgraph -You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). +You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can now query your subgraph by following [these instructions](/developer/query-the-graph). 
You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). ## Hosted Service -### 1. 安装Graph CLI +### 1. 1. Install the Graph CLI "The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. @@ -127,7 +128,7 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. 2. Initialize your Subgraph - Initialize your subgraph from an existing contract. @@ -135,7 +136,7 @@ $ yarn global add @graphprotocol/graph-cli $ graph init --product hosted-service --from-contract
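# Hypothetical example invocation (the contract address, GitHub user and subgraph
# name below are illustrative placeholders only):
# $ graph init --product hosted-service --from-contract 0x0000000000000000000000000000000000000000 example-user/example-subgraph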
``` -- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` +- You will be asked for a subgraph name. You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` Ex: `graphprotocol/examplesubgraph` - If you'd like to initialize from an example, run the command below: @@ -145,9 +146,9 @@ $ graph init --product hosted-service --from-example - In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -### 3. Write your Subgraph +### 3. 3. Write your Subgraph -The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: +The previous command will have created a scaffold from where you can build your subgraph. The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index - Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph @@ -155,10 +156,10 @@ The previous command will have created a scaffold from where you can build your For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Deploy your Subgraph +### 4. 4. Deploy your Subgraph - Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account -- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. +- Click Add Subgraph and fill out the required information. Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. - Run codegen in the subgraph folder ```sh @@ -169,19 +170,20 @@ $ npm run codegen $ yarn codegen ``` -- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. +- Add your Access token and deploy your subgraph. Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. Check your logs +### 5. 5. Check your logs -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +The logs should tell you if there are any errors. The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). 
The query below will tell you when a subgraph fails so you can debug accordingly: Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { + indexingStatuses(subgraphs: ["Qm..."]) { indexingStatuses(subgraphs: ["Qm..."]) { node synced @@ -222,6 +224,6 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. 6. Query your Subgraph Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. From 519733a2e5b09f98a3f5d4683b4a18fbf524788f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:10:37 -0500 Subject: [PATCH 364/432] New translations query-the-graph.mdx (Chinese Simplified) --- pages/zh/developer/query-the-graph.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/zh/developer/query-the-graph.mdx b/pages/zh/developer/query-the-graph.mdx index ae480b1e6883..e94221883503 100644 --- a/pages/zh/developer/query-the-graph.mdx +++ b/pages/zh/developer/query-the-graph.mdx @@ -8,7 +8,7 @@ An example is provided below, but please see the [Query API](/developer/graphql- #### Example -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +This query lists all the counters our mapping has created. This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { @@ -21,12 +21,12 @@ This query lists all the counters our mapping has created. Since we only create ## Using The Graph Explorer -Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. +Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. ![Query Subgraph Pane](/img/query-subgraph-pane.png) -As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). +As you can notice, this query URL must use a unique API key. As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). 
-Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). +Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). You can learn more about billing [here](/studio/billing). You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. From e88c425779ebb89816b38c64a34cae2737285349 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:10:42 -0500 Subject: [PATCH 365/432] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 44 ++++++++++++------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index 5fe5ccacae0e..527174738779 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -2,25 +2,25 @@ title: 将子图部署到托管服务上 --- -如果您尚未查看,请先查看如何编写组成 [子图清单](/developer/create-subgraph-hosted#the-subgraph-manifest) 的文件以及如何安装 [Graph CLI](https://github.com/graphprotocol/graph-cli) 为您的子图生成代码。 现在,让我们将您的子图部署到托管服务上。 +If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. 现在,让我们将您的子图部署到托管服务上。 ## 创建托管服务帐户 -在使用托管服务之前,请先在我们的托管服务中创建一个帐户。 为此,您将需要一个 [Github](https://github.com/) 帐户;如果您还没有,您需要先创建一个账户。 然后,导航到 [托管服务](https://thegraph.com/hosted-service/), 单击 _'使用 Github 注册'_ 按钮并完成 Github 的授权流程。 +在使用托管服务之前,请先在我们的托管服务中创建一个帐户。 为此,您将需要一个 [Github](https://github.com/) 帐户;如果您还没有,您需要先创建一个账户。 Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. ## 存储访问令牌 -创建帐户后,导航到您的 [仪表板](https://thegraph.com/hosted-service/dashboard)。 复制仪表板上显示的访问令牌并运行 `graph auth --product hosted-service `。 这会将访问令牌存储在您的计算机上。 如果您不需要重新生成访问令牌,您就只需要这样做一次。 +创建帐户后,导航到您的 [仪表板](https://thegraph.com/hosted-service/dashboard)。 After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. 这会将访问令牌存储在您的计算机上。 如果您不需要重新生成访问令牌,您就只需要这样做一次。 ## 在托管服务上创建子图 -在部署子图之前,您需要在 The Graph Explorer 中创建它。 转到 [仪表板](https://thegraph.com/hosted-service/dashboard) ,单击 _'添加子图'_ 按钮,并根据需要填写以下信息: +在部署子图之前,您需要在 The Graph Explorer 中创建它。 Before deploying the subgraph, you need to create it in The Graph Explorer. 
Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: **图像** - 选择要用作子图的预览图和缩略图的图像。 -**子图名称** - 子图名称连同下面将要创建的子图帐户名称,将定义用于部署和 GraphQL 端点的`account-name/subgraph-name`样式名称。 _此字段以后无法更改。_ +**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ _此字段以后无法更改。_ -**帐户** - 创建子图的帐户。 这可以是个人或组织的帐户。 _以后不能在帐户之间移动子图。_ +**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ 这可以是个人或组织的帐户。 _以后不能在帐户之间移动子图。_ **副标题** - 将出现在子图卡中的文本。 @@ -30,7 +30,7 @@ title: 将子图部署到托管服务上 **隐藏** - 打开此选项可隐藏Graph Explorer中的子图。 -保存新子图后,您会看到一个屏幕,其中包含有关如何安装 Graph CLI、如何为新子图生成脚手架以及如何部署子图的帮助信息。 前面两部分在[定义子图](/developer/define-subgraph-hosted)中进行了介绍。 +After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). 前面两部分在[定义子图](/developer/define-subgraph-hosted)中进行了介绍。 ## 在托管服务上部署子图 @@ -38,25 +38,26 @@ title: 将子图部署到托管服务上 您可以通过运行 `yarn deploy`来部署子图。 -部署子图后,Graph Explorer将切换到显示子图的同步状态。 根据需要从历史以太坊区块中提取的数据量和事件数量的不同,从创世区块开始,同步操作可能需要几分钟到几个小时。 一旦 Graph节点从历史区块中提取了所有数据,子图状态就会切换到`Synced`。 当新的以太坊区块出现时,Graph节点将继续按照子图的要求检查这些区块的信息。 +部署子图后,Graph Explorer将切换到显示子图的同步状态。 根据需要从历史以太坊区块中提取的数据量和事件数量的不同,从创世区块开始,同步操作可能需要几分钟到几个小时。 一旦 Graph节点从历史区块中提取了所有数据,子图状态就会切换到`Synced`。 After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. ## 重新部署子图 -更改子图定义后,例如:修复实体映射中的一个问题,再次运行上面的 `yarn deploy` 命令可以部署新版本的子图。 子图的任何更新都需要Graph节点再次从创世块开始重新索引您的整个子图。 +When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. 子图的任何更新都需要Graph节点再次从创世块开始重新索引您的整个子图。 -如果您之前部署的子图仍处于`Syncing`状态,系统则会立即将其替换为新部署的版本。 如果之前部署的子图已经完全同步,Graph节点会将新部署的版本标记为`Pending Version`,在后台进行同步,只有在新版本同步完成后,才会用新的版本替换当前部署的版本。 这样做可以确保在新版本同步时您仍然有子图可以使用。 +如果您之前部署的子图仍处于`Syncing`状态,系统则会立即将其替换为新部署的版本。 If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. 
这样做可以确保在新版本同步时您仍然有子图可以使用。 ### 将子图部署到多个以太坊网络 -在某些情况下,您可能希望将相同的子图部署到多个以太坊网络,而无需复制其所有代码。 这样做的主要挑战是这些网络上的合约地址不同。 允许参数化合约地址等配置的一种解决方案是使用 [Mustache](https://mustache.github.io/)或 [Handlebars](https://handlebarsjs.com/)等模板系统。 +In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). 这样做的主要挑战是这些网络上的合约地址不同。 允许参数化合约地址等配置的一种解决方案是使用 [Mustache](https://mustache.github.io/)或 [Handlebars](https://handlebarsjs.com/)等模板系统。 -为了说明这种方法,我们假设使用不同的合约地址将子图部署到主网和 Ropsten上。 您可以定义两个配置文件,为每个网络提供相应的地址: +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: 您可以定义两个配置文件,为每个网络提供相应的地址: ```json { "network": "mainnet", "address": "0x123..." } +} ``` 和 @@ -66,12 +67,14 @@ title: 将子图部署到托管服务上 "network": "ropsten", "address": "0xabc..." } +} ``` 除此之外,您可以用变量占位符 `{{network}}` 和 `{{address}}` 替换清单中的网络名称和地址,并将清单重命名为例如 `subgraph.template.yaml`: ```yaml # ... +# ... dataSources: - kind: ethereum/contract name: Gravity @@ -90,6 +93,10 @@ dataSources: ```json { ... + "scripts": { + ... + { + ... "scripts": { ... "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", @@ -99,6 +106,9 @@ dataSources: ... "mustache": "^3.1.0" } +} + "mustache": "^3.1.0" + } } ``` @@ -118,9 +128,9 @@ yarn prepare:ropsten && yarn deploy ## 检查子图状态 -如果子图成功同步,这是表明它将运行良好的一个好的信号。 但是,链上的新事件可能会导致您的子图遇到未经测试的错误环境,或者由于性能或节点方面的问题而开始落后于链上数据。 +如果子图成功同步,这是表明它将运行良好的一个好的信号。 If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph 节点公开了一个 graphql 端点,您可以通过查询该端点来检查子图的状态。 在托管服务上,该端点的链接是 `https://api.thegraph.com/index-node/graphql`。 在本地节点上,默认情况下该端点在端口 `8030/graphql` 上可用。 该端点的完整数据模式可以在[此处](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。 这是一个检查子图当前版本状态的示例查询: +Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
Here is an example query that checks the status of the current version of a subgraph: 在托管服务上,该端点的链接是 `https://api.thegraph.com/index-node/graphql`。 在本地节点上,默认情况下该端点在端口 `8030/graphql` 上可用。 该端点的完整数据模式可以在[此处](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。 这是一个检查子图当前版本状态的示例查询: ```graphql { @@ -147,14 +157,14 @@ Graph 节点公开了一个 graphql 端点,您可以通过查询该端点来 } ``` -这将为您提供 `chainHeadBlock`,您可以将其与子图上的 `latestBlock` 进行比较,以检查子图是否落后。 通过`synced`,可以了解子图是否与链上数据完全同步。 如果子图没有发生错误,`health` 将返回`healthy`,如果有一个错误导致子图的同步进度停止,那么 `health`将返回`failed` 。 在这种情况下,您可以检查 `fatalError` 字段以获取有关此错误的详细信息。 +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. 通过`synced`,可以了解子图是否与链上数据完全同步。 如果子图没有发生错误,`health` 将返回`healthy`,如果有一个错误导致子图的同步进度停止,那么 `health`将返回`failed` 。 在这种情况下,您可以检查 `fatalError` 字段以获取有关此错误的详细信息。 ## 子图归档策略 -托管服务是一个免费的Graph节点索引器。 开发人员可以部署索引一系列网络的子图,这些网络将被索引,并可以通过 graphQL 进行查询。 +托管服务是一个免费的Graph节点索引器。 The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. 为了提高活跃子图的服务性能,托管服务将归档不活跃的子图。 **如果一个子图在 45 天前部署到托管服务,并且在过去 30 天内收到 0 个查询,则将其定义为“不活跃”。** -如果开发人员的一个子图被标记为不活跃,并将 7 天后被删除,托管服务会通过电子邮件通知开发者。 如果他们希望“激活”他们的子图,他们可以通过在其子图的托管服务 graphQL playground中发起查询来实现。 如果再次需要使用这个子图,开发人员也可以随时重新部署存档的子图。 +Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. 如果他们希望“激活”他们的子图,他们可以通过在其子图的托管服务 graphQL playground中发起查询来实现。 如果再次需要使用这个子图,开发人员也可以随时重新部署存档的子图。 From 9e2ce87c1b18324543116539baf1b119e577401f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:10:50 -0500 Subject: [PATCH 366/432] New translations distributed-systems.mdx (Chinese Simplified) --- pages/zh/developer/distributed-systems.mdx | 54 ++++++++++++++++++++-- 1 file changed, 49 insertions(+), 5 deletions(-) diff --git a/pages/zh/developer/distributed-systems.mdx b/pages/zh/developer/distributed-systems.mdx index 894fcbe2e18b..ae06b86555f7 100644 --- a/pages/zh/developer/distributed-systems.mdx +++ b/pages/zh/developer/distributed-systems.mdx @@ -21,17 +21,17 @@ Consider this example of what may occur if a client polls an Indexer for the lat From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. 
The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers.

It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency, as there is no one right program for every problem.

Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas.

## Polling for updated data

The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal to or higher than `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced the min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced the min block and make the request for the latest block the Indexer has synced.

We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. 
Here is an example: +We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: Here is an example: ```javascript /// Updates the protocol.paused variable to the latest @@ -42,6 +42,17 @@ async function updateProtocolPaused() { // same as leaving out that argument. let minBlock = 0 + for (;;) { + // Schedule a promise that will be ready once + // the next Ethereum block will likely be available. + /// Updates the protocol.paused variable to the latest +/// known value in a loop by fetching it using The Graph. +async function updateProtocolPaused() { + // It's ok to start with minBlock at 0. The query will be served + // using the latest block available. Setting minBlock to 0 is the + // same as leaving out that argument. + let minBlock = 0 + for (;;) { // Schedule a promise that will be ready once // the next Ethereum block will likely be available. @@ -71,11 +82,17 @@ async function updateProtocolPaused() { await nextBlock } } + console.log(response.protocol.paused) + + // Sleep to wait for the next block + await nextBlock + } +} ``` ## Fetching a set of related items -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. @@ -86,6 +103,14 @@ async function getDomainNames() { let pages = 5 const perPage = 1000 + // The first query will get the first page of results and also get the block + // hash so that the remainder of the queries are consistent with the first. + /// Gets a list of domain names from a single block using pagination +async function getDomainNames() { + // Set a cap on the maximum number of items to pull. + let pages = 5 + const perPage = 1000 + // The first query will get the first page of results and also get the block // hash so that the remainder of the queries are consistent with the first. 
  let query = `
  // ...

  while (data.domains.length == perPage && --pages) {
    let lastID = data.domains[data.domains.length - 1].id
    query = `
    {
      domains(first: ${perPage}, where: { id_gt: "${lastID}" }, block: { hash: "${blockHash}" }) {
        name
        id
      }
    }`

    data = await graphql(query)

    // Accumulate domain names into the result
    for (domain of data.domains) {
      result.push(domain.name)
    }
  }

  return result
}
```

From 3cce550dd1e96c62ef919ff602955a10b940717c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:10:55 -0500
Subject: [PATCH 367/432] New translations graphql-api.mdx (Chinese Simplified)
---
 pages/zh/developer/graphql-api.mdx | 34 +++++++++++++++---------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/pages/zh/developer/graphql-api.mdx b/pages/zh/developer/graphql-api.mdx
index f9cb6214fcd9..d835b27e91b3 100644
--- a/pages/zh/developer/graphql-api.mdx
+++ b/pages/zh/developer/graphql-api.mdx
@@ -6,7 +6,7 @@ This guide explains the GraphQL Query API that is used for the Graph Protocol.

## Queries

In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph.

#### Examples

@@ -36,7 +36,7 @@ Query all `Token` entities:

### Sorting

When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending.

#### Example

@@ -51,11 +51,11 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe

### Pagination

When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time.

Further, the `skip` parameter can be used to skip entities and paginate. e.g.
`first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.

Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example.

#### Example

@@ -87,7 +87,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect

#### Example

If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query:

```graphql
{
  query manyTokens($lastID: String) {
    tokens(first: 1000, where: { id_gt: $lastID }) {
      id
      owner
    }
  }
}
```

The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.

### Filtering

You can use the `where` parameter in your queries to filter for different properties. You can filter on multiple values within the `where` parameter.

#### Example

@@ -154,13 +154,13 @@ _not_starts_with
_not_ends_with
```

Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`.

### Time-travel queries

You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.

Note that the current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.

@@ -198,9 +198,9 @@ This query will return `Challenge` entities, and their associated `Application`

### Fulltext Search Queries

Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph.

Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.

Fulltext search operators:

@@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations
  }
}
```

The `follow by` operator specifies words a specific distance apart in the fulltext documents.
The following query will return all blogs with variations of "decentralize" followed by "philosophy"

```graphql
{
  blogSearch(text: "decentralized <-> philosophy") {
    id
    title
    body
    author
  }
}
```

Combine fulltext operators to make more complex filters. With a prefix search operator combined with a follow by, this example query will match all blog entities with words that start with "lou" followed by "music".

```graphql
{
  blogSearch(text: "lou:* <-> music") {
    id
    title
    body
    author
  }
}
```

## Schema

The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest.

> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

## Entities

All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.

> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
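To make the `block` argument described in the Time-travel queries section above easier to picture, here is a minimal client-side sketch. It is not part of this patch: the subgraph URL placeholders and the `tokens` entity are assumptions for illustration only, and the query shape follows the pagination and time-travel examples already shown.

```typescript
// Hypothetical sketch: fetch entities pinned to a past block.
// SUBGRAPH_URL and the `tokens` fields are placeholders, not from this patch.
const SUBGRAPH_URL = 'https://api.thegraph.com/subgraphs/name/<org>/<subgraph>'

async function tokensAtBlock(blockNumber: number): Promise<unknown[]> {
  // The `block: { number: ... }` argument pins the query to that block,
  // so re-running it later should return the same result once the block is final.
  const query = `
  {
    tokens(first: 10, block: { number: ${blockNumber} }) {
      id
      owner
    }
  }`

  const response = await fetch(SUBGRAPH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const { data } = await response.json()
  return data.tokens
}
```

Pinning by `block: { hash: ... }` works the same way, and is what the `getDomainNames` pagination example earlier in this series relies on to keep all pages consistent with one another.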
From 103be9701d7e53aa06866a927b2e13b6efeb794a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:11:00 -0500
Subject: [PATCH 368/432] New translations matchstick.mdx (Chinese Simplified)
---
 pages/zh/developer/matchstick.mdx | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/pages/zh/developer/matchstick.mdx b/pages/zh/developer/matchstick.mdx
index 3cf1ec761bb9..882fe20cb392 100644
--- a/pages/zh/developer/matchstick.mdx
+++ b/pages/zh/developer/matchstick.mdx
@@ -4,7 +4,7 @@ title: Unit Testing Framework

Matchstick is a unit testing framework, developed by [LimeChain](https://limechain.tech/), that enables subgraph developers to test their mapping logic in a sandboxed environment and deploy their subgraphs with confidence!

Follow the [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) to install. Now, you can move on to writing your first unit test.

## Write a Unit Test

@@ -61,7 +61,7 @@ export function createNewGravatarEvent(
  }
}
```

We first have to create a test file in our project. We have chosen the name `gravity.test.ts`. In the newly created file we need to define a function named `runTests()`. It is important that the function has that exact name. This is an example of how our tests might look like:

```typescript
import { clearStore, test, assert } from 'matchstick-as/assembly/index'
@@ -92,20 +92,22 @@ export function runTests(): void {
  test('Next test', () => {
    //...
  })
}
```

That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks.
The rest of it is pretty straightforward - here's what happens:

- We're setting up our initial state and adding one custom Gravatar entity;
- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function;
- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events;
- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that get added when the handler function is called;
- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want.

There we go - we've created our first test! 👏

❗ **IMPORTANT:** _In order for the tests to work, we need to export the `runTests()` function in our mappings file. It won't be used there, but the export statement has to be there so that it can get picked up by Rust later when running the tests._

You can export the tests wrapper function in your mappings file like this:

```
export { runTests } from "../tests/gravity.test.ts";
```

❗ **IMPORTANT:** _Currently there's an issue with using Matchstick when deploying your subgraph. Please only use Matchstick for local testing, and remove/comment out this line (`export { runTests } from "../tests/gravity.test.ts"`) once you're done. We expect to resolve this issue shortly, sorry for the inconvenience!_
_If you don't remove that line, you will get the following error message when attempting to deploy your subgraph:_

```
/...
Mapping terminated before handling trigger: oneshot canceled
.../
```

And if all goes well you should be greeted with the following:

### Hydrating the store with a certain state

Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity:

```typescript
let gravatar = new Gravatar('entryId')
@@ -215,7 +218,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri

### Asserting the state of the store

Users are able to assert the final (or midway) state of the store through asserting entities. In order to do this, the user has to supply an Entity type, the specific ID of an Entity, a name of a field on that Entity, and the expected value of the field. Here's a quick example:

```typescript
import { assert } from 'matchstick-as/assembly/index'

let gravatar = new Gravatar('gravatarId0')
gravatar.save()

assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0')
```

Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully.

### Interacting with Event metadata

Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function. The following example shows how you can read/write to those fields on the Event object:

```typescript
// Read
let logType = newGravatarEvent.logType

// Write
let UPDATED_ADDRESS = '0xB16081F360e3847006dB660bae1c6d1b2e17eC2A'
newGravatarEvent.address = UPDATED_ADDRESS
```

@@ -250,7 +253,7 @@ assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hel

### Asserting that an Entity is **not** in the store

Users can assert that an entity does not exist in the store.
The function takes an entity type and an id. If the entity is in fact in the store, the test will fail with a relevant error message. Here's a quick example of how to use this functionality:

```typescript
assert.notInStore('Gravatar', '23')
```

### Test run time duration in the log output

The log output includes the test run duration. Here's an example:

`Jul 09 14:54:42.420 INFO Program execution time: 10.06022ms`

From e7db7eded98da599f19de8ddcf1243e926c11734 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:11:06 -0500
Subject: [PATCH 369/432] New translations publish-subgraph.mdx (Chinese Simplified)
---
 pages/zh/developer/publish-subgraph.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pages/zh/developer/publish-subgraph.mdx b/pages/zh/developer/publish-subgraph.mdx
index 2f35f5eb1bae..82af8d228093 100644
--- a/pages/zh/developer/publish-subgraph.mdx
+++ b/pages/zh/developer/publish-subgraph.mdx
@@ -14,7 +14,7 @@ The decentralized network currently supports both Rinkeby and Ethereum Mainnet.

### Publishing a subgraph

Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/).

- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet.

@@ -24,4 +24,4 @@ Subgraphs can be published to the decentralized network directly from the Subgra

### Updating metadata for a published subgraph

Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed.
From 91a7e307bae7a800ba7ac4235d6289b8b311bc60 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:11:10 -0500
Subject: [PATCH 370/432] New translations migrating-subgraph.mdx (Chinese Simplified)
---
 .../zh/hosted-service/migrating-subgraph.mdx | 34 +++++++++----------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/pages/zh/hosted-service/migrating-subgraph.mdx b/pages/zh/hosted-service/migrating-subgraph.mdx
index 979d684faeed..16359fcb2dea 100644
--- a/pages/zh/hosted-service/migrating-subgraph.mdx
+++ b/pages/zh/hosted-service/migrating-subgraph.mdx
@@ -6,7 +6,7 @@ title: Migrating an Existing Subgraph to The Graph Network

This is a guide for the migration of subgraphs from the Hosted Service to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Pickle, and BadgerDAO, all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data.

This will tell you everything you need to know about how to migrate to the decentralized network and manage your subgraphs moving forward. The process is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network.

### Migrating An Existing Subgraph to The Graph Network

@@ -20,7 +20,7 @@ npm install -g @graphprotocol/graph-cli
yarn global add @graphprotocol/graph-cli
```

2. Create a subgraph on the [Subgraph Studio](https://thegraph.com/studio/). Guides on how to do that can be found in the [Subgraph Studio docs](/studio/subgraph-studio) and in [this video tutorial](https://www.youtube.com/watch?v=HfDgC2oNnwo).
3. Inside the main project subgraph repository, authenticate the subgraph to deploy and build on the studio:

```sh
graph auth --studio
```

```sh
graph codegen && graph build
```

5. Deploy the subgraph to the Studio. You can find your `` in the Studio UI, which is based on the name of your subgraph.

```sh
graph deploy --studio
```

6. Test queries on the Studio's playground.
Here are some examples for the [Sushi - Mainnet Exchange Subgraph](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Playground):

```sh
{
  users(first: 5) {
    id
    liquidityPositions {
      id
    }
  }
  bundles(first: 5) {
    id
    ethPrice
  }
}
```

7. Fill in the description and the details of your subgraph and choose up to 3 categories. Upload a project image in the Studio if you'd like as well.
8. Publish the subgraph on The Graph's Network by hitting the "Publish" button.

- Remember that publishing is an on-chain action and will require gas to be paid for in Ethereum - see an example transaction [here](https://etherscan.io/tx/0xd0c3fa0bc035703c9ba1ce40c1862559b9c5b6ea1198b3320871d535aa0de87b). Prices are roughly around 0.0425 ETH at 100 gwei.
- Any time you need to upgrade your subgraph, you will be charged an upgrade fee. Remember, upgrading is just publishing another version of your existing subgraph on-chain. Because this incurs a cost, it is highly recommended to deploy and test your subgraph on Rinkeby before deploying to mainnet. It can, in some cases, also require some GRT if there is no signal on that subgraph. In the case there is signal/curation on that subgraph version (using auto-migrate), the taxes will be split.

And that's it! After you are done publishing, you'll be able to view your subgraphs live on the network via [The Graph Explorer](https://thegraph.com/explorer).

### Upgrading a Subgraph on the Network

If you would like to upgrade an existing subgraph on the network, you can do this by deploying a new version of your subgraph to the Subgraph Studio using the Graph CLI.

1. Make changes to your current subgraph. A good idea is to test small fixes on the Subgraph Studio by publishing to Rinkeby.
2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc):

```sh
graph deploy --studio
```

3. Test the new version in the Subgraph Studio by querying in the playground
4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).

### Owner Upgrade Fee: Deep Dive
An upgrade requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every upgrade, a new bonding curve will be created (more on bonding curves [here](/curating#bonding-curve-101)).

The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive upgrade calls. The example below is only the case if your subgraph is being actively curated on. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal yourself on your own subgraph.

- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
- Owner upgrades to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned
- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the upgrade, otherwise the upgrade will not succeed. This happens in the same transaction as the upgrade.

_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of upgrades for subgraph developers._

If you're making a lot of changes to your subgraph, it is not a good idea to continually upgrade it and front the upgrade costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective, but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an upgrade so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/8tgJ7rKW) on Discord to let Indexers know when you're versioning your subgraphs.

Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
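As a rough illustration of the upgrade-fee arithmetic described above, here is a small sketch. The helper function and its structure are mine, not part of this patch; the numbers simply reproduce the 100,000 GRT example from the text (2.5% curation tax, half of which is paid by the owner).

```typescript
// Sketch of the owner upgrade fee split described in the deep dive above.
const CURATION_TAX = 0.025 // 2.5% charged on GRT migrated to the new bonding curve

function upgradeFeeBreakdown(signaledGrt: number) {
  const totalTax = signaledGrt * CURATION_TAX // burned during migration
  const ownerShare = totalTax / 2             // the owner must hold this in their wallet
  const curatorShare = totalTax / 2           // absorbed by curators as a fee
  const movedToNewCurve = signaledGrt - totalTax
  return { totalTax, ownerShare, curatorShare, movedToNewCurve }
}

// With 100,000 GRT signaled: 2,500 GRT tax, 1,250 GRT paid by the owner,
// and 97,500 GRT lands on the new bonding curve.
console.log(upgradeFeeBreakdown(100_000))
```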
### Updating the Metadata of a Subgraph

You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio, where you can edit all applicable fields.

Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.

## Best Practices for Deploying a Subgraph to The Graph Network

@@ -119,7 +119,7 @@ Follow the steps [here](/developer/deprecating-a-subgraph) to deprecate your sub

The Hosted Service was set up to allow developers to deploy their subgraphs without any restrictions.

In order for The Graph Network to truly be decentralized, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/studio/billing).

### Estimate Query Fees on the Network

@@ -133,7 +133,7 @@ Remember that it's a dynamic and growing market, but how you interact with it is

## Additional Resources

If you're still confused, fear not! Check out the following resources or watch our video guide on migrating subgraphs to the decentralized network below:
Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements.

You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely!

Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries.
**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist!

## Versioning your Subgraph with the CLI

Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version.

Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
Please note that there are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, developers must also fund a part of the curation tax on auto-migrating signal. You cannot publish a new version of your subgraph if curators have not signaled on it. For more information on the risks of curation, please read more [here](/curating).

### Automatic Archiving of Subgraph Versions

Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived.

![Subgraph Studio - Unarchive](/img/Unarchive.png)

## Managing your API Keys

Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application.

The Studio will list out existing API keys, which will give you the ability to manage or delete them.

@@ -110,13 +110,13 @@ The Studio will list out existing API keys, which will give you the ability to m

- View the current usage of the API key with stats:
  - Number of queries
  - Amount of GRT spent
2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can:
  - View and manage the domain names authorized to use your API key
  - Assign subgraphs that can be queried with your API key

## How to Manage your Subgraph

API keys aside, you’ll have many tools at your disposal to manage your subgraphs.
You can organize your subgraphs by their **status** and **category**.

- The **Status** tag allows you to pick between a variety of tags including ``, ``, ``, ``, etc.
- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc.

From 70dcbad1dfd0e524d81be0791f2bd4b7ceba75d7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 12:11:30 -0500
Subject: [PATCH 375/432] New translations near.mdx (Chinese Simplified)
---
 pages/zh/supported-networks/near.mdx | 50 ++++++++++++++--------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx
index e5980fba4e95..162420eae81f 100644
--- a/pages/zh/supported-networks/near.mdx
+++ b/pages/zh/supported-networks/near.mdx
@@ -8,20 +8,20 @@ title: Building Subgraphs on NEAR

## What is NEAR?

[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information.

## What are NEAR subgraphs?

The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts.

Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs:

- Block handlers: these run on every new block
- Receipt handlers: run every time a message is executed at a specified account

[From the NEAR documentation](https://docs.near.org/docs/concepts/transaction#receipt):

> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.

## Building a NEAR Subgraph

NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.

There are three aspects of subgraph definition:

**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.

**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema).
**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality.

During subgraph development there are two key commands:

```bash
$ graph codegen # generates code from the schema file identified in the manifest
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder
```

### Subgraph Manifest Definition

The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:

```yaml
specVersion: 0.0.2
schema:
  file: ./src/schema.graphql # link to the schema file
dataSources:
  - kind: near
    network: near-mainnet
    source:
      account: app.good-morning.near # This data source will monitor this account
    mapping:
      apiVersion: 0.0.5
      language: wasm/assemblyscript
      blockHandlers:
        - handler: handleNewBlock # the function name in the mapping file
      receiptHandlers:
        - handler: handleReceipt # the function name in the mapping file
      file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
```

- NEAR subgraphs introduce a new `kind` of data source (`near`)
- The `network` should correspond to a network on the hosting Graph Node. On the Hosted Service, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
- NEAR data sources introduce an optional `source.account` field, which is a human readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account, or a sub account.

NEAR data sources support two types of handlers:

- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources).

### Schema Definition

Schema definition describes the structure of the resulting subgraph database, and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema).
There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). ### AssemblyScript Mappings @@ -158,11 +158,11 @@ These types are passed to block & receipt handlers: Otherwise the rest of the [AssemblyScript API](/developer/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet in beta, with the following network names: @@ -171,7 +171,7 @@ The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet More information on creating and deploying subgraphs on the Hosted Service can be found [here](/hosted-service/deploy-subgraph-hosted). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". +As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: @@ -194,7 +194,7 @@ graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: You can check its progress by querying the subgraph itself: ``` { @@ -216,7 +216,7 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. +The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/developer/graphql-api) for more information.

## Example Subgraphs

Here are some example subgraphs for reference:

[NEAR Blocks](https://github.com/graphprotocol/example-subgraph/tree/near-blocks-example)

[NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example)

## FAQ

### How does the beta work?

NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments!

### Can a subgraph index both NEAR and EVM chains?

No, a subgraph can only support data sources from one chain / network.

### Can subgraphs react to more specific triggers?

Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.

### Will receipt handlers trigger for accounts and their sub accounts?

Receipt handlers will only be triggered for the exact-match of the named account. More flexibility may be added in future.

### Can NEAR subgraphs make view calls to NEAR accounts during mappings?

This is not supported. We are evaluating whether this functionality is required for indexing.

### Can I use data source templates in my NEAR subgraph?

This is not currently supported. We are evaluating whether this functionality is required for indexing.

### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph?

Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced.
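To make the "synced with the chain head" check above concrete, one low-effort option is to poll the subgraph's built-in `_meta` field and compare the latest indexed block against the head block reported by your NEAR endpoint. The query below is a minimal sketch that assumes the standard `_meta` field exposed by Graph Node; nothing in it is NEAR-specific.

```graphql
# Minimal sync-progress check for a freshly deployed version.
# `block.number` is the latest block this deployment has indexed;
# `hasIndexingErrors` flags whether indexing has hit a fatal error.
{
  _meta {
    block {
      number
      hash
    }
    deployment
    hasIndexingErrors
  }
}
```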
### My question hasn't been answered, where can I get more help building NEAR subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. +If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. ## References From 61fd0be60d93b8209139098a5b7a009a9f79138e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:34 -0500 Subject: [PATCH 376/432] New translations what-is-hosted-service.mdx (Chinese Simplified) --- .../hosted-service/what-is-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/hosted-service/what-is-hosted-service.mdx b/pages/zh/hosted-service/what-is-hosted-service.mdx index 24d7068c1b44..7c04db99dd7a 100644 --- a/pages/zh/hosted-service/what-is-hosted-service.mdx +++ b/pages/zh/hosted-service/what-is-hosted-service.mdx @@ -2,19 +2,19 @@ title: 什么是托管服务? --- -本节将引导您将子图部署到 [托管服务](https://thegraph.com/hosted-service/) 提醒一下,托管服务不会很快关闭。 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 -If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. +If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. ## Create a Subgraph -First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` +First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. 
Create a subgraph by passing in `graph init --product hosted-service`

### From an Existing Contract

If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from this contract can be a good way to get started on the Hosted Service.

You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/).

```sh
graph init \
  --product hosted-service \
  --from-contract <CONTRACT_ADDRESS> \
  <GITHUB_USER>/<SUBGRAPH_NAME> [<DIRECTORY>]
```

Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form.

```sh
--network <ETHEREUM_NETWORK> \
--abi <FILE> \
```

The `<GITHUB_USER>` in this case is your github user or organization name, `<SUBGRAPH_NAME>` is the name for your subgraph, and `<DIRECTORY>` is the optional name of the directory where graph init will put the example subgraph manifest. The `<CONTRACT_ADDRESS>` is the address of your existing contract. `<ETHEREUM_NETWORK>` is the name of the Ethereum network that the contract lives on. `<FILE>` is a local path to a contract ABI file. **Both --network and --abi are optional.**

### From an Example Subgraph

The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this:

```
graph init --from-example --product hosted-service <GITHUB_USER>/<SUBGRAPH_NAME> [<DIRECTORY>]
```

The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more.
+The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. ## 托管服务支持的网络 -请注意托管服务支持以下网络。 [Graph Explorer](https://thegraph.com/explorer)目前不支持以太坊主网(“主网”)之外的网络。 +请注意托管服务支持以下网络。 Please note that the following networks are supported on the Hosted Service. Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) - `mainnet` - `kovan` From 8c154c56bc1156603821c000ec997cfc3801dff9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:38 -0500 Subject: [PATCH 377/432] New translations query-hosted-service.mdx (Chinese Simplified) --- pages/zh/hosted-service/query-hosted-service.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/hosted-service/query-hosted-service.mdx b/pages/zh/hosted-service/query-hosted-service.mdx index ad41c4bede90..3bfd32ad34f7 100644 --- a/pages/zh/hosted-service/query-hosted-service.mdx +++ b/pages/zh/hosted-service/query-hosted-service.mdx @@ -8,7 +8,7 @@ title: 查询托管服务 #### 示例 -此查询列出了我们的映射创建的所有计数器。 由于我们只创建一个,结果将只包含我们的一个 `默认计数器`: +此查询列出了我们的映射创建的所有计数器。 This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { From 945ac8ae3629e4c70d0ad888b787f32ceeac0e2a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:40 -0500 Subject: [PATCH 378/432] New translations what-is-hosted-service.mdx (Spanish) --- pages/es/hosted-service/what-is-hosted-service.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/es/hosted-service/what-is-hosted-service.mdx b/pages/es/hosted-service/what-is-hosted-service.mdx index 15894df53eda..03b41d6578b5 100644 --- a/pages/es/hosted-service/what-is-hosted-service.mdx +++ b/pages/es/hosted-service/what-is-hosted-service.mdx @@ -32,7 +32,7 @@ Además, puedes utilizar los siguientes argumentos opcionales. Si la ABI no pued El ``en este caso es tu nombre de usuario u organización de github, `` es el nombre para tu subgrafo, y `` es el nombre opcional del directorio donde graph init pondrá el manifiesto del subgrafo de ejemplo. El `` es la dirección de tu contrato existente. `` es el nombre de la red Ethereum en la que está activo el contrato. `` es una ruta local a un archivo ABI del contrato. **Tanto --network como --abi son opcionales** -### From an Example Subgraph +### De un Subgrafo de Ejemplo El segundo modo que admite `graph init` es la creación de un nuevo proyecto a partir de un subgrafo de ejemplo. 
El siguiente comando lo hace: @@ -40,11 +40,11 @@ El segundo modo que admite `graph init` es la creación de un nuevo proyecto a p graph init --from-example --product hosted-service / [] ``` -El subgrafo de ejemplo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. El subgrafo maneja estos eventos escribiendo entidades `Gravatar` en el almacén de Graph Node y asegurándose de que éstas se actualicen según los eventos. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. +El subgrafo de ejemplo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. El subgrafo maneja estos eventos escribiendo entidades `Gravatar` en el almacén de the Graph Node y asegurándose de que éstas se actualicen según los eventos. Continúa con el [manifiesto del subgrafo](/developer/create-subgraph-hosted#the-subgraph-manifest) para entender mejor a qué eventos de tus contratos inteligentes hay que prestar atención, los mapeos y mucho más. -## Supported Networks on the Hosted Service +## Redes Admitidas en el Servicio Alojado -Please note that the following networks are supported on the Hosted Service. Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) +Ten en cuenta que las siguientes redes son admitidas en el Servicio Alojado. Las redes fuera de la red principal de Ethereum ('mainnet') no son actualmente admitidas en [The Graph Explorer.](https://thegraph.com/explorer) - `mainnet` - `kovan` From 3cab5fdeb06c0093af021eb9eff63c45855cad4f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:45 -0500 Subject: [PATCH 379/432] New translations deploy-subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/deploy-subgraph-studio.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/zh/studio/deploy-subgraph-studio.mdx b/pages/zh/studio/deploy-subgraph-studio.mdx index 62f614ab7d15..140c591633b5 100644 --- a/pages/zh/studio/deploy-subgraph-studio.mdx +++ b/pages/zh/studio/deploy-subgraph-studio.mdx @@ -2,7 +2,7 @@ title: 将一个子图部署到子图工作室 --- -将一个子图部署到子图工作室是非常简单的。 你可以通过以下步骤完成: +Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: 你可以通过以下步骤完成: - 安装Graph CLI(同时使用yarn和npm)。 - 在子图工作室中创建你的子图 @@ -11,7 +11,7 @@ title: 将一个子图部署到子图工作室 ## 安装Graph CLI -我们使用相同的CLI将子图部署到我们的 [托管服务](https://thegraph.com/hosted-service/) 和[Subgraph Studio](https://thegraph.com/studio/)中。 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 +We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 **用yarn安装:** @@ -27,7 +27,7 @@ npm install -g @graphprotocol/graph-cli ## 在子图工作室中创建你的子图 -在部署你的实际子图之前,你需要在 [子图工作室](https://thegraph.com/studio/)中创建一个子图。 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 +Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). 
We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 ## 初始化你的子图 @@ -41,11 +41,11 @@ graph init --studio ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -运行`graph init`后,你会被要求输入你想查询的合同地址、网络和abi。 这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 +After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. 这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 ## Graph 认证 -在能够将你的子图部署到子图工作室之前,你需要在CLI中登录到你的账户。 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 +Before being able to deploy your subgraph to Subgraph Studio, you need to login to your account within the CLI. To do this, you will need your deploy key that you can find on your "My Subgraphs" page or on your subgraph details page. 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 以下是你需要使用的命令,以从CLI进行认证: @@ -55,7 +55,7 @@ graph auth --studio ## 将一个子图部署到子图工作室 -一旦你准备好了,你可以将你的子图部署到子图工作室。 这样做不会将你的子图发布到去中心化的网络中,它只会将它部署到你的Studio账户中,在那里你将能够测试它并更新元数据。 +一旦你准备好了,你可以将你的子图部署到子图工作室。 Once you are ready, you can deploy your subgraph to Subgraph Studio. Doing this won't publish your subgraph to the decentralized network, it will only deploy it to your Studio account where you will be able to test it and update the metadata. 这里是你需要使用的CLI命令,以部署你的子图。 @@ -63,6 +63,6 @@ graph auth --studio graph deploy --studio ``` -运行这个命令后,CLI会要求提供一个版本标签,你可以随意命名,你可以使用 `0.1`和 `0.2`这样的标签,或者也可以使用字母,如 `uniswap-v2-0.1` . 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 +After running this command, the CLI will ask for a version label, you can name it however you want, you can use labels such as `0.1` and `0.2` or use letters as well such as `uniswap-v2-0.1` . Those labels will be visible in Graph Explorer and can be used by curators to decide if they want to signal on this version or not, so choose them wisely. 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 一旦部署完毕,你可以在子图工作室中使用控制面板测试你的子图,如果需要的话,可以部署另一个版本,更新元数据,当你准备好后,将你的子图发布到Graph Explorer。 From 1fdd22b50652109cc6df7644e1f3be53c7470a5d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:51 -0500 Subject: [PATCH 380/432] New translations billing.mdx (Chinese Simplified) --- pages/zh/studio/billing.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/zh/studio/billing.mdx b/pages/zh/studio/billing.mdx index ce99acd65775..48332e8bc19a 100644 --- a/pages/zh/studio/billing.mdx +++ b/pages/zh/studio/billing.mdx @@ -16,13 +16,13 @@ title: 子图工作室的计费 2. 将GRT和ETH发送到你的钱包里 3. 使用用户界面桥接GRT到Polygon - a) 在你向Polygon桥发送任何数量的GRT后,你将在几分钟内收到0.001 Matic。 你可以在搜索栏中输入你的地址,在 [Polygonscan](https://polygonscan.com/)上跟踪交易情况。 + a) You will receive 0.001 Matic in a few minutes after you send any amount of GRT to the Polygon bridge. You can track the transaction on [Polygonscan](https://polygonscan.com/) by inputting your address into the search bar. 你可以在搜索栏中输入你的地址,在 [Polygonscan](https://polygonscan.com/)上跟踪交易情况。 -4. 在Polygon的计费合同中加入桥接的GRT。 计费合同地址是:[0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). +4. Add bridged GRT to the billing contract on Polygon. 
The billing contract address is: [0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). 计费合同地址是:[0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). - a) 为了完成第4步,你需要将钱包中的网络切换到Polygon。 你可以通过连接你的钱包并点击[这里](https://chainlist.org/) 的 "选择Matic(Polygon)主网 "来添加Polygon的网络。一旦你添加了网络,在你的钱包里通过导航到右上角的网络图标来切换它。 在Metamask中,该网络被称为 **Matic Mainnnet.** + a) 为了完成第4步,你需要将钱包中的网络切换到Polygon。 a) In order to complete step #4, you'll need to switch your network in your wallet to Polygon. You can add Polygon's network by connecting your wallet and clicking on "Choose Matic (Polygon) Mainnet" [here.](https://chainlist.org/) Once you've added the network, switch it over in your wallet by navigating to the network pill on the top right hand side corner. In Metamask, the network is called **Matic Mainnnet.** 在Metamask中,该网络被称为 **Matic Mainnnet.** -在每个周末,如果你使用了你的API密钥,你将会收到一张基于你在这期间产生的查询费用的发票。 这张发票将用你余额中的GRT来支付。 查询量是由你拥有的API密钥来评估的。 你的余额将在费用提取后被更新。 +At the end of each week, if you used your API keys, you will receive an invoice based on the query fees you have generated during this period. This invoice will be paid using GRT available in your balance. Query volume is evaluated by the API keys you own. Your balance will be updated after fees are withdrawn. 这张发票将用你余额中的GRT来支付。 查询量是由你拥有的API密钥来评估的。 你的余额将在费用提取后被更新。 #### 下面是你如何进行开票的过程: @@ -51,7 +51,7 @@ title: 子图工作室的计费 ### 多重签名用户 -多重合约是只能存在于它们所创建的网络上的智能合约,所以如果你在以太坊主网上创建了一个--它将只存在于主网上。 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 +Multisigs are smart-contracts that can exist only on the network they have been created, so if you created one on Ethereum Mainnet - it will only exist on Mainnet. Since our billing uses Polygon, if you were to bridge GRT to the multisig address on Polygon the funds would be lost. 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 为了克服这个问题,我们创建了 [一个专门的工具](https://multisig-billing.thegraph.com/),它将帮助你用一个标准的钱包/EOA(一个由私钥控制的账户)在我们的计费合同上存入GRT(代表multisig)。 @@ -60,7 +60,7 @@ title: 子图工作室的计费 这个工具将指导你完成以下步骤: 1. 连接你的标准钱包/EOA(这个钱包需要拥有一些ETH以及你要存入的GRT)。 -2. 桥GRT到Polygon。 在交易完成后,你需要等待7-8分钟,以便最终完成桥梁转移。 +2. 桥GRT到Polygon。 Bridge GRT to Polygon. You will have to wait 7-8 minutes after the transaction is complete for the bridge transfer to be finalized. 3. 一旦你的GRT在你的Polygon余额中可用,你就可以把它们存入账单合同,同时在`Multisig地址栏` 中指定你要资助的multisig地址。 一旦存款交易得到确认,你就可以回到 [Subgraph Studio](https://thegraph.com/studio/),并与你的Gnosis Safe Multisig连接,以创建API密钥并使用它们来生成查询。 From 00cfb35b41f22d747ab296b20bf60c3bab4e00b3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:53 -0500 Subject: [PATCH 381/432] New translations deploy-subgraph-studio.mdx (Spanish) --- pages/es/studio/deploy-subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/studio/deploy-subgraph-studio.mdx b/pages/es/studio/deploy-subgraph-studio.mdx index 17e72d3dac07..72ca3decc35b 100644 --- a/pages/es/studio/deploy-subgraph-studio.mdx +++ b/pages/es/studio/deploy-subgraph-studio.mdx @@ -1,5 +1,5 @@ --- -title: Deploy a Subgraph to the Subgraph Studio +title: Despliegue de un subgrafo en Subgraph Studio --- Deploying a Subgraph to the Subgraph Studio is quite simple. 
This will take you through the steps to: From 6777b9507bc6410dc05d312a4aee3c36949e99e9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 12:11:57 -0500 Subject: [PATCH 382/432] New translations curating.mdx (Spanish) --- pages/es/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/curating.mdx b/pages/es/curating.mdx index 425cb5608b6f..ddc4e0125b20 100644 --- a/pages/es/curating.mdx +++ b/pages/es/curating.mdx @@ -91,7 +91,7 @@ Se sugiere que no actualices tus subgrafos con demasiada frecuencia. Consulta la Las participaciones de un curador no se pueden "comprar" o "vender" como otros tokens ERC20 con los que seguramente estás familiarizado. Solo pueden anclar (crearse) o quemarse (destruirse) a lo largo de la curva de vinculación de un subgrafo en particular. La cantidad de GRT necesaria para generar una nueva señal y la cantidad de GRT que recibes cuando quemas tu señal existente, está determinada por esa curva de vinculación. Como curador, debes saber que cuando quemas tus acciones de curación para retirar GRT, puedes terminar con más o incluso con menos GRT de los que depositaste en un inicio. -¿Sigues confundido? Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación: +¿Sigues confundido? Still confused? Still confused? Check out our Curation video guide below:
Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Rinkeby, the testnet we support. If you’re a first time subgraph developer, we highly suggest you start with publishing to Rinkeby, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements.

You’ll only be able to index data from mainnet (even if your subgraph was published to a testnet) because only subgraphs that are indexing mainnet data can be published to the network. This is because indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely!

Subgraphs with curation signal are shown to Indexers so that they can be indexed on the decentralized network. You can publish subgraphs and signal in one transaction, which allows you to mint the first curation signal on the subgraph and saves on gas costs. By adding your signal to the signal later provided by Curators, your subgraph will also have a higher chance of ultimately serving queries.
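Before relying on a newly published version in production, it can be worth running the same query your dapp will issue against the fresh deployment, for example from the playground in The Graph Explorer. The snippet below is only an illustration: `tokens`, `id` and `owner` are placeholder names standing in for whatever entities and fields your own schema defines.

```graphql
# Placeholder entity and fields - swap in names from your own schema.
# `first`, `orderBy` and `orderDirection` are the standard pagination
# and sorting arguments generated for every entity type.
{
  tokens(first: 5, orderBy: id, orderDirection: asc) {
    id
    owner
  }
}
```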
**Now that you’ve published your subgraph, let’s get into how you’ll manage them on a regular basis.** Note that you cannot publish your subgraph to the network if it has failed syncing. This is usually because the subgraph has bugs - the logs will tell you where those issues exist!

## Versioning your Subgraph with the CLI

Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and indexers will be able to index this new version.

Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
Please note that there are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, developers must also fund a part of the curation tax on auto-migrating signal. You cannot publish a new version of your subgraph if curators have not signaled on it. For more information on the risks of curation, please read more [here](/curating).

### Automatic Archiving of Subgraph Versions

Whenever you deploy a new subgraph version in the Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived.

![Subgraph Studio - Unarchive](/img/Unarchive.png)

## Managing your API Keys

Regardless of whether you’re a dapp developer or a subgraph developer, you’ll need to manage your API keys. This is important for you to be able to query subgraphs because API keys make sure the connections between application services are valid and authorized. This includes authenticating the end user and the device using the application.

The Studio will list out existing API keys, which will give you the ability to manage or delete them.

1. Under **Overview**, you’ll be able to:
   - Edit your key name
   - Regenerate API keys
   - View the current usage of the API key with stats:
     - Number of queries
     - Amount of GRT spent
2. Under **Manage Security Settings**, you’ll be able to opt into security settings depending on the level of control you’d like to have over your API keys. In this section, you can:
   - View and manage the domain names authorized to use your API key
   - Assign subgraphs that can be queried with your API key

## How to Manage your Subgraph

API keys aside, you’ll have many tools at your disposal to manage your subgraphs. You can organize your subgraphs by their **status** and **category**.
+API keys aside, you’ll have many tools at your disposal to manage your subgraphs. You can organize your subgraphs by their **status** and **category**. - The **Status** tag allows you to pick between a variety of tags including ``, ``, ``, ``, etc. -- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. Options include ``, ``, ``, etc. +- Meanwhile, **Category** allows you to designate what category your subgraph falls into. Options include ``, ``, ``, etc. From 6661ebad6ab27b0dc3f672975b7ee3aee0f84ab8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:40 -0500 Subject: [PATCH 416/432] New translations near.mdx (Chinese Simplified) --- pages/zh/supported-networks/near.mdx | 50 ++++++++++++++-------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx index 162420eae81f..e5980fba4e95 100644 --- a/pages/zh/supported-networks/near.mdx +++ b/pages/zh/supported-networks/near.mdx @@ -8,20 +8,20 @@ title: 在 NEAR 上构建子图 ## NEAR是什么? -[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. 请访问 [官方文档](https://docs.near.org/docs/concepts/new-to-near) 了解更多信息。 +[NEAR](https://near.org/) 是一个用于构建去中心化应用程序的智能合约平台。 请访问 [官方文档](https://docs.near.org/docs/concepts/new-to-near) 了解更多信息。 ## NEAR子图是什么? -Graph 为开发人员提供了一种被称为子图的工具,利用这个工具,开发人员能够处理区块链事件,并通过 GraphQL API提供结果数据。 The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +Graph 为开发人员提供了一种被称为子图的工具,利用这个工具,开发人员能够处理区块链事件,并通过 GraphQL API提供结果数据。 [Graph节点](https://github.com/graphprotocol/graph-node)现在能够处理 NEAR 事件,这意味着 NEAR 开发人员现在可以构建子图来索引他们的智能合约。 -子图是基于事件的,这意味着子图可以侦听并处理链上事件。 Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs: +子图是基于事件的,这意味着子图可以侦听并处理链上事件。 NEAR 子图目前支持两种类型的处理程序: - 区块处理器: 这些处理程序在每个新区块上运行 - 收据处理器: 每次在指定帐户上一个消息被执行时运行。 [NEAR 文档中](https://docs.near.org/docs/concepts/transaction#receipt): -> Receipt是系统中唯一可操作的对象。 A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Receipt是系统中唯一可操作的对象。 当我们在 NEAR 平台上谈论“处理交易”时,这最终意味着在某个时候“应用收据”。 ## 构建NEAR子图 @@ -35,11 +35,11 @@ NEAR子图开发需要`0.23.0`以上版本的`graph-cli`,以及 `0.23.0`以上 子图定义包括三个方面: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. NEAR 是一种全新`类型`数据源。 +**subgraph.yaml:** 子图清单,定义感兴趣的数据源以及如何处理它们。 NEAR 是一种全新`类型`数据源。 -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). 
NEAR 子图的要求包含在 [现有文档](/developer/create-subgraph-hosted#the-graphql-schema)中。 +**schema.graphql:** 一个模式文件,它定义为您的子图存储哪些数据,以及如何通过 GraphQL 查询它。 NEAR 子图的要求包含在 [现有文档](/developer/create-subgraph-hosted#the-graphql-schema)中。 -**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. NEAR 支持引入了 NEAR 特定的数据类型和新的JSON 解析功能。 +**AssemblyScript 映射:**将事件数据转换为模式文件中定义的实体的[AssemblyScript 代码](/developer/assemblyscript-api)。 NEAR 支持引入了 NEAR 特定的数据类型和新的JSON 解析功能。 在子图开发过程中,有两个关键命令: @@ -50,7 +50,7 @@ $ graph build # 从 AssemblyScript 文件生成 Web Assembly,并在 /build 文 ### 子图清单定义 -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:: 以下是一个NEAR 的子图清单的例子: +子图清单(`subgraph.yaml`)标识子图的数据源、感兴趣的触发器以及响应这些触发器而运行的函数。 以下是一个NEAR 的子图清单的例子: ```yaml specVersion: 0.0.2 @@ -73,17 +73,17 @@ dataSources: ``` - NEAR subgraphs introduce a new `kind` of data source (`near`) -- The `network` should correspond to a network on the hosting Graph Node. The `network` should correspond to a network on the hosting Graph Node. On the Hosted Service, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account, or a sub account. This can be an account, or a sub account. +- The `network` should correspond to a network on the hosting Graph Node. On the Hosted Service, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account, or a sub account. NEAR data sources support two types of handlers: -- `blockHandlers`: run on every new NEAR block. No `source.account` is required. No `source.account` is required. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) must be added as independent data sources). ### Schema Definition -Schema definition describes the structure of the resulting subgraph database, and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). 
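For readers who have not written a subgraph schema before, a minimal `schema.graphql` might look like the sketch below. The entity and field names are purely illustrative; only the `@entity` and `@derivedFrom` directives and the `ID`/`BigInt` scalar types are the standard schema building blocks documented for Graph Node.

```graphql
# Illustrative example only - entity and field names are placeholders.
type Account @entity {
  id: ID!
  # Reverse side of the relationship: stored on Receipt, derived here.
  receipts: [Receipt!]! @derivedFrom(field: "signer")
}

type Receipt @entity {
  id: ID!
  signer: Account!
  blockNumber: BigInt!
}
```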
+Schema definition describes the structure of the resulting subgraph database, and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developer/create-subgraph-hosted#the-graphql-schema). ### AssemblyScript Mappings @@ -158,11 +158,11 @@ These types are passed to block & receipt handlers: Otherwise the rest of the [AssemblyScript API](/developer/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developer/assemblyscript-api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet in beta, with the following network names: @@ -171,7 +171,7 @@ The Graph's Hosted Service currently supports indexing NEAR mainnet and testnet More information on creating and deploying subgraphs on the Hosted Service can be found [here](/hosted-service/deploy-subgraph-hosted). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". +As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On the Hosted Service, this can be done from [your Dashboard](https://thegraph.com/hosted-service/dashboard): "Add Subgraph". Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: @@ -194,7 +194,7 @@ graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: You can check its progress by querying the subgraph itself: +Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: ``` { @@ -216,7 +216,7 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. 
Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. +The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/developer/graphql-api) for more information. ## Example Subgraphs @@ -230,7 +230,7 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! ### Can a subgraph index both NEAR and EVM chains? @@ -238,27 +238,27 @@ No, a subgraph can only support data sources from one chain / network. ### Can subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. ### Will receipt handlers trigger for accounts and their sub accounts? -Receipt handlers will only be triggered for the exact-match of the named account. More flexibility may be added in future. More flexibility may be added in future. +Receipt handlers will only be triggered for the exact-match of the named account. More flexibility may be added in future. ### Can NEAR subgraphs make view calls to NEAR accounts during mappings? -This is not supported. This is not supported. We are evaluating whether this functionality is required for indexing. +This is not supported. We are evaluating whether this functionality is required for indexing. ### Can I use data source templates in my NEAR subgraph? -This is not currently supported. This is not supported. We are evaluating whether this functionality is required for indexing. +This is not currently supported. We are evaluating whether this functionality is required for indexing. ### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? -Pending functionality is not yet supported for NEAR subgraphs. Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR subgraphs. 
In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. ### My question hasn't been answered, where can I get more help building NEAR subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. +If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/developer/quick-start). Otherwise please join [The Graph Protocol Discord](https://discord.gg/vtvv7FP) and ask in the #near channel, or email near@thegraph.com. ## References From 0cb479bf7c3218a3bf611f6dbc529a785f579443 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:44 -0500 Subject: [PATCH 417/432] New translations what-is-hosted-service.mdx (Chinese Simplified) --- .../hosted-service/what-is-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/hosted-service/what-is-hosted-service.mdx b/pages/zh/hosted-service/what-is-hosted-service.mdx index 7c04db99dd7a..24d7068c1b44 100644 --- a/pages/zh/hosted-service/what-is-hosted-service.mdx +++ b/pages/zh/hosted-service/what-is-hosted-service.mdx @@ -2,19 +2,19 @@ title: 什么是托管服务? --- -This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 +本节将引导您将子图部署到 [托管服务](https://thegraph.com/hosted-service/) 提醒一下,托管服务不会很快关闭。 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 -If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. +If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. ## Create a Subgraph -First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. 
Create a subgraph by passing in `graph init --product hosted service` Create a subgraph by passing in `graph init --product hosted service` +First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` ### From an Existing Contract If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from this contract can be a good way to get started on the Hosted Service. -You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). +You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). ```sh graph init \ @@ -23,28 +23,28 @@ graph init \ / [] ``` -Additionally, you can use the following optional arguments. Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form. If any optional arguments are missing from the command, it takes you through an interactive form. +Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form. ```sh --network \ --abi \ ``` -The `` in this case is your github user or organization name, `` is the name for your subgraph, and `` is the optional name of the directory where graph init will put the example subgraph manifest. The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. **Both --network and --abi are optional.** The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. **Both --network and --abi are optional.** +The `` in this case is your github user or organization name, `` is the name for your subgraph, and `` is the optional name of the directory where graph init will put the example subgraph manifest. The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. **Both --network and --abi are optional.** ### From an Example Subgraph -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: The following command does this: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: ``` graph init --from-example --product hosted-service / [] ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. 
Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. ## 托管服务支持的网络 -请注意托管服务支持以下网络。 Please note that the following networks are supported on the Hosted Service. Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) +请注意托管服务支持以下网络。 [Graph Explorer](https://thegraph.com/explorer)目前不支持以太坊主网(“主网”)之外的网络。 - `mainnet` - `kovan` From 4f5e06b94ad1fa8fe1b659bde09064fc9d884223 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:48 -0500 Subject: [PATCH 418/432] New translations query-hosted-service.mdx (Chinese Simplified) --- pages/zh/hosted-service/query-hosted-service.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/hosted-service/query-hosted-service.mdx b/pages/zh/hosted-service/query-hosted-service.mdx index 3bfd32ad34f7..ad41c4bede90 100644 --- a/pages/zh/hosted-service/query-hosted-service.mdx +++ b/pages/zh/hosted-service/query-hosted-service.mdx @@ -8,7 +8,7 @@ title: 查询托管服务 #### 示例 -此查询列出了我们的映射创建的所有计数器。 This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +此查询列出了我们的映射创建的所有计数器。 由于我们只创建一个,结果将只包含我们的一个 `默认计数器`: ```graphql { From 2c2e376a6a5c9b0f891843fdbe20108ead05e942 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:51 -0500 Subject: [PATCH 419/432] New translations what-is-hosted-service.mdx (Arabic) --- pages/ar/hosted-service/what-is-hosted-service.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/hosted-service/what-is-hosted-service.mdx b/pages/ar/hosted-service/what-is-hosted-service.mdx index eafea9ab9935..491b79119f4f 100644 --- a/pages/ar/hosted-service/what-is-hosted-service.mdx +++ b/pages/ar/hosted-service/what-is-hosted-service.mdx @@ -2,7 +2,7 @@ title: What is the Hosted Service? --- -We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. 
Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. From f22c4d75ef386bb246f59ef49fcce699b236efca Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:52 -0500 Subject: [PATCH 420/432] New translations what-is-hosted-service.mdx (Japanese) --- pages/ja/hosted-service/what-is-hosted-service.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/hosted-service/what-is-hosted-service.mdx b/pages/ja/hosted-service/what-is-hosted-service.mdx index 0aa6dafcb012..7f604c8dc31a 100644 --- a/pages/ja/hosted-service/what-is-hosted-service.mdx +++ b/pages/ja/hosted-service/what-is-hosted-service.mdx @@ -2,7 +2,7 @@ title: What is the Hosted Service? --- -We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. From 814702a85ed15669e53e73198a17150ae1d96aac Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:53 -0500 Subject: [PATCH 421/432] New translations what-is-hosted-service.mdx (Korean) --- pages/ko/hosted-service/what-is-hosted-service.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/hosted-service/what-is-hosted-service.mdx b/pages/ko/hosted-service/what-is-hosted-service.mdx index 0aa6dafcb012..7f604c8dc31a 100644 --- a/pages/ko/hosted-service/what-is-hosted-service.mdx +++ b/pages/ko/hosted-service/what-is-hosted-service.mdx @@ -2,7 +2,7 @@ title: What is the Hosted Service? --- -We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. 
This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. From ce3777bd6e1a94d451cb4a55ca7fd3bb3a66fd37 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:54 -0500 Subject: [PATCH 422/432] New translations deploy-subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/deploy-subgraph-studio.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/zh/studio/deploy-subgraph-studio.mdx b/pages/zh/studio/deploy-subgraph-studio.mdx index 140c591633b5..62f614ab7d15 100644 --- a/pages/zh/studio/deploy-subgraph-studio.mdx +++ b/pages/zh/studio/deploy-subgraph-studio.mdx @@ -2,7 +2,7 @@ title: 将一个子图部署到子图工作室 --- -Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: 你可以通过以下步骤完成: +将一个子图部署到子图工作室是非常简单的。 你可以通过以下步骤完成: - 安装Graph CLI(同时使用yarn和npm)。 - 在子图工作室中创建你的子图 @@ -11,7 +11,7 @@ Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you ## 安装Graph CLI -We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 +我们使用相同的CLI将子图部署到我们的 [托管服务](https://thegraph.com/hosted-service/) 和[Subgraph Studio](https://thegraph.com/studio/)中。 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 **用yarn安装:** @@ -27,7 +27,7 @@ npm install -g @graphprotocol/graph-cli ## 在子图工作室中创建你的子图 -Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 +在部署你的实际子图之前,你需要在 [子图工作室](https://thegraph.com/studio/)中创建一个子图。 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 ## 初始化你的子图 @@ -41,11 +41,11 @@ graph init --studio ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. 
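Putting the Studio steps described in this patch together (initialize, authenticate, deploy), a minimal sketch looks like this; the subgraph slug is hypothetical, the deploy key comes from your "My Subgraphs" page, and the assumption here is that the CLI prompts for the key when it is not passed inline:

```sh
# "my-example-subgraph" is a made-up slug; use the one from your own Studio account.
graph init --studio my-example-subgraph
graph auth --studio                        # paste your deploy key when prompted
graph deploy --studio my-example-subgraph  # you will then be asked for a version label such as 0.1
```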
这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 +运行`graph init`后,你会被要求输入你想查询的合同地址、网络和abi。 这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 ## Graph 认证 -Before being able to deploy your subgraph to Subgraph Studio, you need to login to your account within the CLI. To do this, you will need your deploy key that you can find on your "My Subgraphs" page or on your subgraph details page. 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 +在能够将你的子图部署到子图工作室之前,你需要在CLI中登录到你的账户。 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 以下是你需要使用的命令,以从CLI进行认证: @@ -55,7 +55,7 @@ graph auth --studio ## 将一个子图部署到子图工作室 -一旦你准备好了,你可以将你的子图部署到子图工作室。 Once you are ready, you can deploy your subgraph to Subgraph Studio. Doing this won't publish your subgraph to the decentralized network, it will only deploy it to your Studio account where you will be able to test it and update the metadata. +一旦你准备好了,你可以将你的子图部署到子图工作室。 这样做不会将你的子图发布到去中心化的网络中,它只会将它部署到你的Studio账户中,在那里你将能够测试它并更新元数据。 这里是你需要使用的CLI命令,以部署你的子图。 @@ -63,6 +63,6 @@ graph auth --studio graph deploy --studio ``` -After running this command, the CLI will ask for a version label, you can name it however you want, you can use labels such as `0.1` and `0.2` or use letters as well such as `uniswap-v2-0.1` . Those labels will be visible in Graph Explorer and can be used by curators to decide if they want to signal on this version or not, so choose them wisely. 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 +运行这个命令后,CLI会要求提供一个版本标签,你可以随意命名,你可以使用 `0.1`和 `0.2`这样的标签,或者也可以使用字母,如 `uniswap-v2-0.1` . 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 一旦部署完毕,你可以在子图工作室中使用控制面板测试你的子图,如果需要的话,可以部署另一个版本,更新元数据,当你准备好后,将你的子图发布到Graph Explorer。 From b1fbe5a8e39a8c252c5ecceee983a2d6ad4a55e0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:07:59 -0500 Subject: [PATCH 423/432] New translations billing.mdx (Chinese Simplified) --- pages/zh/studio/billing.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/zh/studio/billing.mdx b/pages/zh/studio/billing.mdx index 48332e8bc19a..ce99acd65775 100644 --- a/pages/zh/studio/billing.mdx +++ b/pages/zh/studio/billing.mdx @@ -16,13 +16,13 @@ title: 子图工作室的计费 2. 将GRT和ETH发送到你的钱包里 3. 使用用户界面桥接GRT到Polygon - a) You will receive 0.001 Matic in a few minutes after you send any amount of GRT to the Polygon bridge. You can track the transaction on [Polygonscan](https://polygonscan.com/) by inputting your address into the search bar. 你可以在搜索栏中输入你的地址,在 [Polygonscan](https://polygonscan.com/)上跟踪交易情况。 + a) 在你向Polygon桥发送任何数量的GRT后,你将在几分钟内收到0.001 Matic。 你可以在搜索栏中输入你的地址,在 [Polygonscan](https://polygonscan.com/)上跟踪交易情况。 -4. Add bridged GRT to the billing contract on Polygon. The billing contract address is: [0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). 计费合同地址是:[0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). +4. 在Polygon的计费合同中加入桥接的GRT。 计费合同地址是:[0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). - a) 为了完成第4步,你需要将钱包中的网络切换到Polygon。 a) In order to complete step #4, you'll need to switch your network in your wallet to Polygon. 
You can add Polygon's network by connecting your wallet and clicking on "Choose Matic (Polygon) Mainnet" [here.](https://chainlist.org/) Once you've added the network, switch it over in your wallet by navigating to the network pill on the top right hand side corner. In Metamask, the network is called **Matic Mainnnet.** 在Metamask中,该网络被称为 **Matic Mainnnet.** + a) 为了完成第4步,你需要将钱包中的网络切换到Polygon。 你可以通过连接你的钱包并点击[这里](https://chainlist.org/) 的 "选择Matic(Polygon)主网 "来添加Polygon的网络。一旦你添加了网络,在你的钱包里通过导航到右上角的网络图标来切换它。 在Metamask中,该网络被称为 **Matic Mainnnet.** -At the end of each week, if you used your API keys, you will receive an invoice based on the query fees you have generated during this period. This invoice will be paid using GRT available in your balance. Query volume is evaluated by the API keys you own. Your balance will be updated after fees are withdrawn. 这张发票将用你余额中的GRT来支付。 查询量是由你拥有的API密钥来评估的。 你的余额将在费用提取后被更新。 +在每个周末,如果你使用了你的API密钥,你将会收到一张基于你在这期间产生的查询费用的发票。 这张发票将用你余额中的GRT来支付。 查询量是由你拥有的API密钥来评估的。 你的余额将在费用提取后被更新。 #### 下面是你如何进行开票的过程: @@ -51,7 +51,7 @@ At the end of each week, if you used your API keys, you will receive an invoice ### 多重签名用户 -Multisigs are smart-contracts that can exist only on the network they have been created, so if you created one on Ethereum Mainnet - it will only exist on Mainnet. Since our billing uses Polygon, if you were to bridge GRT to the multisig address on Polygon the funds would be lost. 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 +多重合约是只能存在于它们所创建的网络上的智能合约,所以如果你在以太坊主网上创建了一个--它将只存在于主网上。 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 为了克服这个问题,我们创建了 [一个专门的工具](https://multisig-billing.thegraph.com/),它将帮助你用一个标准的钱包/EOA(一个由私钥控制的账户)在我们的计费合同上存入GRT(代表multisig)。 @@ -60,7 +60,7 @@ Multisigs are smart-contracts that can exist only on the network they have been 这个工具将指导你完成以下步骤: 1. 连接你的标准钱包/EOA(这个钱包需要拥有一些ETH以及你要存入的GRT)。 -2. 桥GRT到Polygon。 Bridge GRT to Polygon. You will have to wait 7-8 minutes after the transaction is complete for the bridge transfer to be finalized. +2. 桥GRT到Polygon。 在交易完成后,你需要等待7-8分钟,以便最终完成桥梁转移。 3. 一旦你的GRT在你的Polygon余额中可用,你就可以把它们存入账单合同,同时在`Multisig地址栏` 中指定你要资助的multisig地址。 一旦存款交易得到确认,你就可以回到 [Subgraph Studio](https://thegraph.com/studio/),并与你的Gnosis Safe Multisig连接,以创建API密钥并使用它们来生成查询。 From 7bbaaf436f6b152f03f9d0c6ece24cfe669c0cb6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 17:08:06 -0500 Subject: [PATCH 424/432] New translations curating.mdx (Spanish) --- pages/es/curating.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/curating.mdx b/pages/es/curating.mdx index ddc4e0125b20..425cb5608b6f 100644 --- a/pages/es/curating.mdx +++ b/pages/es/curating.mdx @@ -91,7 +91,7 @@ Se sugiere que no actualices tus subgrafos con demasiada frecuencia. Consulta la Las participaciones de un curador no se pueden "comprar" o "vender" como otros tokens ERC20 con los que seguramente estás familiarizado. Solo pueden anclar (crearse) o quemarse (destruirse) a lo largo de la curva de vinculación de un subgrafo en particular. La cantidad de GRT necesaria para generar una nueva señal y la cantidad de GRT que recibes cuando quemas tu señal existente, está determinada por esa curva de vinculación. Como curador, debes saber que cuando quemas tus acciones de curación para retirar GRT, puedes terminar con más o incluso con menos GRT de los que depositaste en un inicio. -¿Sigues confundido? Still confused? Still confused? Check out our Curation video guide below: +¿Sigues confundido? 
Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación:
diff --git a/pages/vi/delegating.mdx b/pages/vi/delegating.mdx index a18f2be577f6..caa911c5c7ad 100644 --- a/pages/vi/delegating.mdx +++ b/pages/vi/delegating.mdx @@ -2,93 +2,93 @@ title: Delegator --- -Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. +Delegator (Người ủy quyền) không bị phạt cắt tài sản (slash) vì hành vi xấu, nhưng có một khoản thuế đặt cọc đối với Delegator để ngăn cản việc đưa ra quyết định kém có thể làm tổn hại đến tính toàn vẹn của mạng. -## Delegator Guide +## Hướng dẫn Delegator This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- The risks of delegating tokens in The Graph Network -- How to calculate expected returns as a delegator -- A Video guide showing the steps to delegate in the Graph Network UI +- Rủi ro của việc ủy quyền token trong Mạng The Graph +- Cách tính lợi nhuận kỳ vọng với tư cách là delegator +- Hướng dẫn bằng video hiển thị các bước để ủy quyền trong Giao diện người dùng Mạng The Graph -## Delegation Risks +## Rủi ro Ủy quyền -Listed below are the main risks of being a delegator in the protocol. +Dưới đây là những rủi ro chính của việc trở thành delegator trong giao thức. -### The delegation fee +### Phí ủy quyền -It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. +Lưu ý quan trọng là mỗi lần bạn ủy quyền, bạn sẽ bị tính phí 0.5%. Nghĩa là nếu bạn đang ủy quyền 1000 GRT, bạn sẽ tự động đốt 5 GRT. -This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. +Nên là, để an toàn, Delegator nên tính toán trước lợi nhuận của họ sẽ như thế nào khi ủy quyền cho Indexer. Ví dụ: Delegator có thể tính toán xem sẽ mất bao nhiêu ngày trước khi họ kiếm lại bù được phần thuế đặt cọc 0.5% cho việc ủy quyền của họ. -### The delegation unbonding period +### Khoảng thời gian bỏ ràng buộc ủy quyền -Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. +Bất cứ khi nào Delegator muốn hủy ủy quyền, token của họ phải chịu khoảng thời gian 28 ngày bỏ ràng buộc ủy quyền. Điều này có nghĩa là họ không thể chuyển token của mình hoặc kiếm bất kỳ phần thưởng nào trong 28 ngày. -One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. +Một điều nữa cũng cần xem xét là lựa chọn Indexer một cách khôn ngoan. Nếu bạn chọn một Indexer không đáng tin cậy hoặc không hoàn thành tốt công việc, bạn sẽ muốn hủy ủy quyền, khi đó bạn sẽ mất rất nhiều cơ hội kiếm được phần thưởng, cũng tệ như việc đốt đi GRT vậy.
- ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day - unbonding period._ + ![Delegation unbonding](/img/Delegation-Unbonding.png) Lưu ý khoản phí 0.5% trong Giao diện người dùng Ủy quyền, cũng + như khoảng thời gian 28 ngày bỏ ràng buộc ủy quyền.
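To make the 0.5% delegation tax and the break-even reasoning above concrete: the 1,000 GRT deposit and the 0.5% tax come from the text, while the 0.05%-per-day reward rate is a made-up figure used only for illustration.

```latex
% 0.5% delegation tax on a 1,000 GRT deposit; the daily reward rate is hypothetical.
\[
\text{burned} = 1000 \times 0.005 = 5~\text{GRT},
\qquad
\text{delegated} = 1000 - 5 = 995~\text{GRT}
\]
\[
\text{break-even} \approx \frac{5~\text{GRT}}{995 \times 0.0005~\text{GRT/day}} \approx 10~\text{days}
\]
```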
-### Choosing a trustworthy indexer with a fair reward payout for delegators +### Chọn một indexer đáng tin cậy với phần thưởng hợp lý cho delegator -This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. +Đây là một phần quan trọng cần hiểu. Đầu tiên chúng ta hãy thảo luận về ba giá trị rất quan trọng, đó là các Tham số Ủy quyền. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. +Phần cắt Thưởng Indexing - Phần cắt thưởng indexing là phần mà Indexer sẽ giữ lại cho họ trong số lượng Thưởng Indexing. Nghĩa là, nếu nó được đặt thành 100%, thì delegator sẽ nhận được 0 phần thưởng indexing. Nếu bạn thấy 80% trong giao diện người dùng, có delegator sẽ nhận được 20%. Một lưu ý quan trọng - trong thời gian đầu của mạng lưới, Thưởng Indexing sẽ chiếm phần lớn trong tổng phần thưởng.
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The - middle one is giving delegators 20%. The bottom one is giving delegators ~83%.* + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) Indexer hàng đầu đang trao cho delegator 90% phần thưởng. Những + Indexer tầm trung đang trao cho delegator 20%. Những Indexer dưới cùng đang trao cho delegator khoản 83%.
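As a quick numeric illustration of the reward cut described above: the 80%/20% split is taken from the text, and the 1,000 GRT of indexing rewards is a hypothetical amount.

```latex
% An indexing reward cut of 80% applied to a hypothetical 1,000 GRT of rewards.
\[
\text{kept by the Indexer} = 1000 \times 0.80 = 800~\text{GRT},
\qquad
\text{shared with the delegation pool} = 1000 \times 0.20 = 200~\text{GRT}
\]
```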
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. +- Phần cắt Phí Truy vấn - cũng như Phần cắt Thưởng Indexing. Tuy nhiên, điều này đặc biệt dành cho lợi nhuận từ phí truy vấn mà Indexer thu thập. Cần lưu ý rằng khi bắt đầu mạng, lợi nhuận từ phí truy vấn sẽ rất nhỏ so với phần thưởng indexing. Bạn nên chú ý đến mạng lưới để xác định khi nào phí truy vấn trong mạng sẽ bắt đầu đáng kể hơn. -As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. +Như bạn có thể thấy, có rất nhiều suy nghĩ phải cân nhắc khi lựa chọn Indexer phù hợp. Đây là lý do tại sao chúng tôi thực sự khuyên bạn nên khám phá The Graph Discord để xác định ai là Indexers có danh tiếng xã hội và danh tiếng kỹ thuật tốt nhất, để thưởng cho delegator trên cơ sở nhất quán. Nhiều Indexers rất tích cực trong Discord và sẽ sẵn lòng trả lời câu hỏi của bạn. Nhiều người trong số họ đã Indexing trong nhiều tháng testnet và đang cố gắng hết sức để giúp những các delegator kiếm được lợi nhuận tốt, vì nó cải thiện sức khỏe và sự thành công của mạng. -### Calculating delegators expected return +### Tính toán lợi nhuận dự kiến của Delegator -A Delegator has to consider a lot of factors when determining the return. These +Một Delegator phải xem xét rất nhiều yếu tố khi xác định lợi nhuận. Như là -- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Một Delegator có trình độ kỹ thuật cũng có thể xem xét cách mà Indexer sử dụng các token được Ủy quyền khả dụng cho họ. Nếu một indexer không phân bổ tất cả các token khả dụng, họ sẽ không kiếm được lợi nhuận tối đa mà họ có thể dành cho chính họ hoặc Delegator của họ. +- Ngay bây giờ trong mạng lưới, Indexer có thể chọn đóng phân bổ và nhận phần thưởng bất kỳ lúc nào trong khoảng thời gian từ 1 đến 28 ngày. Vì vậy, có thể một Indexer có rất nhiều phần thưởng mà họ chưa thu thập được, và do đó, tổng phần thưởng của họ thấp. Điều này cần được xem xét từ những ngày đầu. -### Considering the query fee cut and indexing fee cut +### Xem xét Phần cắt Phí Truy vấn và Phần cắt Phí indexing -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. 
A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: +Như được mô tả trong các phần trên, bạn nên chọn một Indexer minh bạch và trung thực về việc thiết lập Phần cắt Phí Truy vấn và Phần cắt Phí Indexing của họ. Delegator cũng nên xem thời gian Cooldown (thời gian chờ) của Tham số để xem họ có bao nhiêu bộ đệm thời gian. Sau khi hoàn thành, việc tính toán số lượng phần thưởng mà delegator nhận được khá đơn giản. Công thức là: -![Delegation Image 3](/img/Delegation-Reward-Formula.png) +![Ảnh Ủy quyền 3](/img/Delegation-Reward-Formula.png) -### Considering the indexers delegation pool +### Xem xét Delegation pool của Indexer -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: +Một điều khác mà Delegator phải xem xét là tỷ lệ Delegation Pool (Nhóm Ủy quyền) mà họ sở hữu. Tất cả phần thưởng ủy quyền được chia sẻ đồng đều, với một sự tái cân bằng đơn giản của nhóm được xác định bởi số tiền Delegator đã gửi vào pool. Việc này cung cấp cho delegator một phần của pool: -![Share formula](/img/Share-Forumla.png) +![Chia sẻ công thức](/img/Share-Forumla.png) -Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. +Sử dụng công thức này, chúng ta có thể thấy rằng một Indexer đang cung cấp chỉ 20% cho Delegator thực sự có thể thực sự trao cho Delegator một phần thưởng thậm chí còn tốt hơn một Indexer đang chia 90% cho Delegator. -A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. +Do đó, Delegator có thể thực hiện phép toán để xác định rằng người Indexer mục cung cấp 20% cho Delegator kia, đang mang lại lợi nhuận tốt hơn. -### Considering the delegation capacity +### Xem xét Delegation Capacity (Năng lực Ủy quyền) -Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Một điều khác cần xem xét là Năng lực Ủy quyền. Hiện tại, Delegation Ratio (Tỷ lệ Ủy quyền) đang được đặt thành 16. Điều này có nghĩa là nếu một Indexer đã stake 1.000.000 GRT, thì Năng lực Ủy quyền của họ là 16.000.000 GRT Token được Ủy quyền mà họ có thể sử dụng trong giao thức. Bất kỳ lượng token được ủy quyền nào vượt quá con số này sẽ làm loãng tất cả phần thưởng Delegator. -Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. +Ví dụ một Indexer có 100.000.000 GRT được ủy quyền cho họ, và năng lực của họ chỉ là 16.000.000 GRT. Điều này nghĩa là 84.000.000 GRT token không được sử dụng để kiếm token. Và toàn bộ Delegator, và cả Indexer, đang kiếm được ít phần thưởng hơn so với mức họ có thể. 
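The share formula, the 20%-versus-90% comparison, and the delegation-capacity example above can be tied together with numbers. The capacity figures (ratio of 16, 1,000,000 GRT self-stake, 100,000,000 GRT delegated) restate the text; the pool sizes, the 1,000 GRT deposit, and the assumption that both Indexers earn the same 1,000 GRT of rewards in the period are purely illustrative.

```latex
% Delegator reward = period rewards x portion given to Delegators x (deposit / total pool).
% Pool sizes and the equal 1,000 GRT reward assumption are hypothetical.
\[
\text{Indexer A (90\% to Delegators, pool } 100{,}000~\text{GRT}): \quad
1000 \times 0.90 \times \tfrac{1000}{100000} = 9~\text{GRT}
\]
\[
\text{Indexer B (20\% to Delegators, pool } 5{,}000~\text{GRT}): \quad
1000 \times 0.20 \times \tfrac{1000}{5000} = 40~\text{GRT}
\]
% Delegation capacity figures restated from the text (ratio = 16).
\[
16 \times 1{,}000{,}000 = 16{,}000{,}000~\text{GRT capacity},
\qquad
100{,}000{,}000 - 16{,}000{,}000 = 84{,}000{,}000~\text{GRT left unused}
\]
```

Despite offering the lower percentage, the Indexer with the smaller pool gives this Delegator a much larger share, which is exactly the point the text makes.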
-Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. +Do đó, delegator phải luôn xem xét Năng lực Ủy quyền của Indexer và cân nhắc nó trong quá trình ra quyết định của họ. -## Video guide for the network UI +## Video hướng dẫn cho giao diện người dùng mạng lưới -This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI. +Hướng dẫn này cung cấp đánh giá đầy đủ về tài liệu này và cách xem xét mọi thứ trong tài liệu này khi tương tác với giao diện người dùng.
diff --git a/pages/vi/explorer.mdx b/pages/vi/explorer.mdx index a7b8c5204177..f66163c2def8 100644 --- a/pages/vi/explorer.mdx +++ b/pages/vi/explorer.mdx @@ -2,210 +2,211 @@ title: The Graph Explorer --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +Chào mừng bạn đến với Graph Explorer, hay như chúng tôi thường gọi, cổng thông tin phi tập trung của bạn vào thế giới subgraphs và dữ liệu mạng. 👩🏽‍🚀 Graph Explorer bao gồm nhiều phần để bạn có thể tương tác với các nhà phát triển subgraph khác, nhà phát triển dapp, Curators, Indexers, và Delegators. Để biết tổng quan chung về Graph Explorer, hãy xem video bên dưới (hoặc tiếp tục đọc bên dưới):
## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +Điều đầu tiên, nếu bạn vừa hoàn thành việc triển khai và xuất bản subgraph của mình trong Subgraph Studio, thì tab Subgraphs ở trên cùng của thanh điều hướng là nơi để xem các subgraph đã hoàn thành của riêng bạn (và các subgraph của những người khác) trên mạng phi tập trung. Tại đây, bạn sẽ có thể tìm thấy chính xác subgraph mà bạn đang tìm kiếm dựa trên ngày tạo, lượng tín hiệu hoặc tên. -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer Image 1 +](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +Khi bạn nhấp vào một subgraph, bạn sẽ có thể thử các truy vấn trong playground và có thể tận dụng chi tiết mạng để đưa ra quyết định sáng suốt. Bạn cũng sẽ có thể báo hiệu GRT trên subgraph của riêng bạn hoặc các subgraph của người khác để làm cho các indexer nhận thức được tầm quan trọng và chất lượng của nó. Điều này rất quan trọng vì việc báo hiệu trên một subgraph khuyến khích nó được lập chỉ mục, có nghĩa là nó sẽ xuất hiện trên mạng để cuối cùng phục vụ các truy vấn. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +Trên trang chuyên dụng của mỗi subgraph, một số chi tiết được hiển thị. Bao gồm: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- Báo hiệu / Hủy báo hiệu trên subgraph +- Xem thêm chi tiết như biểu đồ, ID triển khai hiện tại và siêu dữ liệu khác +- Chuyển đổi giữa các phiên bản để khám phá các lần bản trước đây của subgraph +- Truy vấn subgraph qua GraphQL +- Thử subgraph trong playground +- Xem các Indexers đang lập chỉ mục trên một subgraph nhất định +- Thống kê Subgraph (phân bổ, Curators, v.v.) +- Xem pháp nhân đã xuất bản subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## Những người tham gia -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +Trong tab này, bạn sẽ có được cái nhìn tổng thể về tất cả những người đang tham gia vào các hoạt động mạng, chẳng hạn như Indexers, Delegators, và Curators. Dưới đây, chúng tôi sẽ đi vào đánh giá sâu về ý nghĩa của mỗi tab đối với bạn. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. 
Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Hãy bắt đầu với Indexers (Người lập chỉ mục). Các Indexers là xương sống của giao thức, là những người đóng góp vào các subgraph, lập chỉ mục chúng và phục vụ các truy vấn cho bất kỳ ai sử dụng subgraph. Trong bảng Indexers, bạn sẽ có thể thấy các thông số ủy quyền của Indexer, lượng stake của họ, số lượng họ đã stake cho mỗi subgraph và doanh thu mà họ đã kiếm được từ phí truy vấn và phần thưởng indexing. Đi sâu hơn: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- Phần Cắt Phí Truy vấn - là % hoàn phí truy vấn mà Indexer giữ lại khi ăn chia với Delegators +- Phần Cắt Thưởng Hiệu quả - phần thưởng indexing được áp dụng cho nhóm ủy quyền (delegation pool). Nếu là âm, điều đó có nghĩa là Indexer đang cho đi một phần phần thưởng của họ. Nếu là dương, điều đó có nghĩa là Indexer đang giữ lại một số phần thưởng của họ +- Cooldown Remaining (Thời gian chờ còn lại) - thời gian còn lại cho đến khi Indexer có thể thay đổi các thông số ủy quyền ở trên. Thời gian chờ Cooldown được Indexers thiết lập khi họ cập nhật thông số ủy quyền của mình +- Được sở hữu - Đây là tiền stake Indexer đã nạp vào, có thể bị phạt cắt giảm (slashed) nếu có hành vi độc hại hoặc không chính xác +- Được ủy quyền - Lượng stake từ các Delegator có thể được Indexer phân bổ, nhưng không thể bị phạt cắt giảm +- Được phân bổ - phần stake mà Indexers đang tích cực phân bổ cho các subgraph mà họ đang lập chỉ mục +- Năng lực Ủy quyền khả dụng - số token stake được ủy quyền mà Indexers vẫn có thể nhận được trước khi họ trở nên ủy quyền quá mức (overdelegated) +- Max Delegation Capacity (Năng lực Ủy quyền Tối đa) - số tiền token stake được ủy quyền tối đa mà Indexer có thể chấp nhận một cách hiệu quả. Số tiền stake được ủy quyền vượt quá con số này sẽ không thể được sử dụng để phân bổ hoặc tính toán phần thưởng. 
+- Phí Truy vấn - đây là tổng số phí mà người dùng cuối đã trả cho các truy vấn từ Indexer đến hiện tại +- Thưởng Indexer - đây là tổng phần thưởng indexer mà Indexer và các Delegator của họ kiếm được cho đến hiện tại. Phần thưởng Indexer được trả thông qua việc phát hành GRT. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers có thể kiếm được cả phí truy vấn và phần thưởng indexing. Về mặt chức năng, điều này xảy ra khi những người tham gia mạng ủy quyền GRT cho Indexer. Điều này cho phép Indexers nhận phí truy vấn và phần thưởng tùy thuộc vào thông số Indexer của họ. Các thông số Indexing được cài đặt bằng cách nhấp vào phía bên phải của bảng hoặc bằng cách truy cập hồ sơ của Indexer và nhấp vào nút “Ủy quyền”. -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Để tìm hiểu thêm về cách trở thành một Indexer, bạn có thể xem qua [tài liệu chính thức](/indexing) hoặc [Hướng dẫn về Indexer của Học viện The Graph.](https://thegraph.academy/delegators/choosing-indexers/) ![Indexing details pane](/img/Indexing-Details-Pane.png) ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators (Người Giám tuyển) phân tích các subgraph để xác định subgraph nào có chất lượng cao nhất. Một khi Curator tìm thấy một subgraph có khả năng hấp dẫn, họ có thể curate nó bằng cách báo hiệu trên đường cong liên kết (bonding curve) của nó. Khi làm như vậy, Curator sẽ cho Indexer biết những subgraph nào có chất lượng cao và nên được lập chỉ mục. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +Curators có thể là các thành viên cộng đồng, người tiêu dùng dữ liệu hoặc thậm chí là nhà phát triển subgraph, những người báo hiệu trên subgraph của chính họ bằng cách nạp token GRT vào một đường cong liên kết. Bằng cách nạp GRT, Curator đúc ra cổ phần curation của một subgraph. Kết quả là, Curators có đủ điều kiện để kiếm một phần phí truy vấn mà subgraph mà họ đã báo hiệu tạo ra. Đường cong liên kết khuyến khích Curators quản lý các nguồn dữ liệu chất lượng cao nhất. 
Bảng Curator trong phần này sẽ cho phép bạn xem: -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- Ngày Curator bắt đầu curate +- Số GRT đã được nạp +- Số cổ phần một Curator sở hữu ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) +Nếu muốn tìm hiểu thêm về vai trò Curator, bạn có thể thực hiện việc này bằng cách truy cập các liên kết sau của [Học viện The Graph](https://thegraph.academy/curators/) hoặc [tài liệu chính thức.](/curating) ### 3. Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators (Người Ủy quyền) đóng một vai trò quan trọng trong việc duy trì tính bảo mật và phân quyền của Mạng The Graph. Họ tham gia vào mạng bằng cách ủy quyền (tức là "staking") token GRT cho một hoặc nhiều indexer. Không có những Delegator, các Indexer ít có khả năng kiếm được phần thưởng và phí đáng kể. Do đó, Indexer tìm cách thu hút Delegator bằng cách cung cấp cho họ một phần của phần thưởng lập chỉ mục và phí truy vấn mà họ kiếm được. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! +Delegator, đổi lại, chọn Indexer dựa trên một số biến số khác nhau, chẳng hạn như hiệu suất trong quá khứ, tỷ lệ phần thưởng lập chỉ mục và phần cắt phí truy vấn. Danh tiếng trong cộng đồng cũng có thể đóng vai trò quan trọng trong việc này! Bạn nên kết nối với những các indexer đã chọn qua[Discord của The Graph](https://thegraph.com/discord) hoặc [Forum The Graph](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +Bảng Delegators sẽ cho phép bạn xem các Delegator đang hoạt động trong cộng đồng, cũng như các chỉ số như: -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- Số lượng Indexers mà một Delegator đang ủy quyền cho +- Ủy quyền ban đầu của Delegator +- Phần thưởng họ đã tích lũy nhưng chưa rút khỏi giao thức +- Phần thưởng đã ghi nhận ra mà họ rút khỏi giao thức +- Tổng lượng GRT mà họ hiện có trong giao thức +- Ngày họ ủy quyền lần cuối cùng -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). 
+Nếu bạn muốn tìm hiểu thêm về cách trở thành một Delegator, đừng tìm đâu xa! Tất cả những gì bạn phải làm là đi đến [tài liệu chính thức](/delegating) hoặc [Học viện The Graph](https://docs.thegraph.academy/network/delegators). ## Mạng lưới -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Trong phần Mạng lưới, bạn sẽ thấy các KPI toàn cầu cũng như khả năng chuyển sang cơ sở từng epoch và phân tích các chỉ số mạng chi tiết hơn. Những chi tiết này sẽ cho bạn biết mạng hoạt động như thế nào theo thời gian. -### Activity +### Hoạt động -The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +Phần hoạt động có tất cả các chỉ số mạng hiện tại cũng như một số chỉ số tích lũy theo thời gian. Ở đây bạn có thể thấy những thứ như: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- Tổng stake mạng hiện tại +- Phần chia stake giữa Indexer và các Delegator của họ +- Tổng cung GRT, lượng được đúc và đốt kể từ khi mạng lưới thành lập +- Tổng phần thưởng Indexing kể từ khi bắt đầu giao thức +- Các thông số giao thức như phần thưởng curation, tỷ lệ lạm phát,... +- Phần thưởng và phí của epoch hiện tại -A few key details that are worth mentioning: +Một vài chi tiết quan trọng đáng được đề cập: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Phí truy vấn đại diện cho phí do người tiêu dùng tạo ra**, và chúng có thể được Indexer yêu cầu (hoặc không) sau một khoảng thời gian ít nhất 7 epochs (xem bên dưới) sau khi việc phân bổ của họ cho các subgraph đã được đóng lại và dữ liệu mà chúng cung cấp đã được người tiêu dùng xác thực. +- **Phần thưởng Indexing đại diện cho số phần thưởng mà Indexer đã yêu cầu được từ việc phát hành mạng trong epoch đó.** Mặc dù việc phát hành giao thức đã được cố định, nhưng phần thưởng chỉ nhận được sau khi Indexer đóng phân bổ của họ cho các subgraph mà họ đã lập chỉ mục. Do đó, số lượng phần thưởng theo từng epoch khác nhau (nghĩa là trong một số epoch, Indexer có thể đã đóng chung các phân bổ đã mở trong nhiều ngày). 
![Explorer Image 8](/img/Network-Stats.png) ### Epochs -In the Epochs section you can analyse on a per-epoch basis, metrics such as: +Trong phần Epochs, bạn có thể phân tích trên cơ sở từng epoch, các chỉ số như: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +- Khối bắt đầu hoặc kết thúc của Epoch +- Phí truy vấn được tạo và phần thưởng indexing được thu thập trong một epoch cụ thể +- Trạng thái Epoch, đề cập đến việc thu và phân phối phí truy vấn và có thể có các trạng thái khác nhau: + - Epoch đang hoạt động là epoch mà Indexer hiện đang phân bổ cổ phần và thu phí truy vấn + - Epoch đang giải quyết là những epoch mà các kênh trạng thái đang được giải quyết. Điều này có nghĩa là Indexers có thể bị phạt cắt giảm nếu người tiêu dùng công khai tranh chấp chống lại họ. + - Epoch đang phân phối là epoch trong đó các kênh trạng thái cho các epoch đang được giải quyết và Indexer có thể yêu cầu hoàn phí truy vấn của họ. + - Epoch được hoàn tất là những epoch không còn khoản hoàn phí truy vấn nào để Indexer yêu cầu, do đó sẽ được hoàn thiện. ![Explorer Image 9](/img/Epoch-Stats.png) -## Your User Profile +## Hồ sơ Người dùng của bạn -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Nãy giờ chúng ta đã nói về các thống kê mạng, hãy chuyển sang hồ sơ cá nhân của bạn. Hồ sơ người dùng cá nhân của bạn là nơi để bạn xem hoạt động mạng của mình, bất kể bạn đang tham gia mạng như thế nào. Ví Ethereum của bạn sẽ hoạt động như hồ sơ người dùng của bạn và với Trang Tổng quan Người dùng, bạn sẽ có thể thấy: -### Profile Overview +### Tổng quan Hồ sơ -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +Đây là nơi bạn có thể xem bất kỳ hành động hiện tại nào bạn đã thực hiện. Đây cũng là nơi bạn có thể tìm thấy thông tin hồ sơ, mô tả và trang web của mình (nếu bạn đã thêm). ![Explorer Image 10](/img/Profile-Overview.png) -### Subgraphs Tab +### Tab Subgraphs -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +Nếu bạn nhấp vào tab Subgraphs, bạn sẽ thấy các subgraph đã xuất bản của mình. Điều này sẽ không bao gồm bất kỳ subgraph nào được triển khai với CLI cho mục đích thử nghiệm - các subgraph sẽ chỉ hiển thị khi chúng được xuất bản lên mạng phi tập trung. 
![Explorer Image 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### Tab Indexing -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +Nếu bạn nhấp vào tab Indexing, bạn sẽ tìm thấy một bảng với tất cả các phân bổ hiện hoạt và lịch sử cho các subgraph, cũng như các biểu đồ mà bạn có thể phân tích và xem hiệu suất trước đây của mình với tư cách là Indexer. -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +Phần này cũng sẽ bao gồm thông tin chi tiết về phần thưởng Indexer ròng của bạn và phí truy vấn ròng. Bạn sẽ thấy các số liệu sau: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- Stake được ủy quyền - phần stake từ Delegator có thể được bạn phân bổ nhưng không thể bị phạt cắt giảm (slashed) +- Tổng Phí Truy vấn - tổng phí mà người dùng đã trả cho các truy vấn do bạn phục vụ theo thời gian +- Phần thưởng Indexer - tổng số phần thưởng Indexer bạn đã nhận được, tính bằng GRT +- Phần Cắt Phí - lượng % hoàn phí phí truy vấn mà bạn sẽ giữ lại khi ăn chia với Delegator +- Phần Cắt Thưởng - lượng % phần thưởng Indexer mà bạn sẽ giữ lại khi ăn chia với Delegator +- Được sở hữu - số stake đã nạp của bạn, có thể bị phạt cắt giảm (slashed) vì hành vi độc hại hoặc không chính xác ![Explorer Image 12](/img/Indexer-Stats.png) -### Delegating Tab +### Tab Delegating -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegator rất quan trọng đối với Mạng The Graph. Một Delegator phải sử dụng kiến thức của họ để chọn một Indexer sẽ mang lại lợi nhuận lành mạnh từ các phần thưởng. Tại đây, bạn có thể tìm thấy thông tin chi tiết về các ủy quyền đang hoạt động và trong lịch sử của mình, cùng với các chỉ số của Indexer mà bạn đã ủy quyền. -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +Trong nửa đầu của trang, bạn có thể thấy biểu đồ ủy quyền của mình, cũng như biểu đồ chỉ có phần thưởng. Ở bên trái, bạn có thể thấy các KPI phản ánh các chỉ số ủy quyền hiện tại của bạn. -The Delegator metrics you’ll see here in this tab include: +Các chỉ số Delegator mà bạn sẽ thấy ở đây trong tab này bao gồm: -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- Tổng pphần thưởng ủy quyền +- Tổng số phần thưởng chưa ghi nhận +- Tổng số phần thưởng đã ghi được -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +Trong nửa sau của trang, bạn có bảng ủy quyền. 
Tại đây, bạn có thể thấy các Indexer mà bạn đã ủy quyền, cũng như thông tin chi tiết của chúng (chẳng hạn như phần cắt thưởng, thời gian chờ, v.v.). -With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. +Với các nút ở bên phải của bảng, bạn có thể quản lý ủy quyền của mình - ủy quyền nhiều hơn, hủy bỏ hoặc rút lại ủy quyền của bạn sau khoảng thời gian rã đông (thawing period). -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +Lưu ý rằng biểu đồ này có thể cuộn theo chiều ngang, vì vậy nếu bạn cuộn hết cỡ sang bên phải, bạn cũng có thể thấy trạng thái ủy quyền của mình (ủy quyền, hủy ủy quyền, có thể rút lại). ![Explorer Image 13](/img/Delegation-Stats.png) -### Curating Tab +### Tab Curating -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +Trong tab Curation, bạn sẽ tìm thấy tất cả các subgraph mà bạn đang báo hiệu (do đó cho phép bạn nhận phí truy vấn). Báo hiệu cho phép Curator đánh dấu cho Indexer biết những subgraph nào có giá trị và đáng tin cậy, do đó báo hiệu rằng chúng cần được lập chỉ mục. -Within this tab, you’ll find an overview of: +Trong tab này, bạn sẽ tìm thấy tổng quan về: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph -- Updated at date details +- Tất cả các subgraph bạn đang quản lý với các chi tiết về tín hiệu +- Tổng cổ phần trên mỗi subgraph +- Phần thưởng truy vấn cho mỗi subgraph +- Chi tiết ngày được cập nhật ![Explorer Image 14](/img/Curation-Stats.png) -## Your Profile Settings +## Cài đặt Hồ sơ của bạn -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +Trong hồ sơ người dùng của mình, bạn sẽ có thể quản lý chi tiết hồ sơ cá nhân của mình (như thiết lập tên ENS). Nếu bạn là Indexer, bạn thậm chí có nhiều quyền truy cập hơn vào các cài đặt trong tầm tay của mình. Trong hồ sơ người dùng của mình, bạn sẽ có thể thiết lập các tham số ủy quyền và operator của mình. -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- Operators (Người vận hành) thực hiện các hành động được hạn chế trong giao thức thay mặt cho Indexer, chẳng hạn như mở và đóng phân bổ. Operators thường là các địa chỉ Ethereum khác, tách biệt với ví đặt staking của họ, với quyền truy cập được kiểm soát vào mạng mà Indexer có thể cài đặt cá nhân +- Tham số ủy quyền cho phép bạn kiểm soát việc phân phối GRT giữa bạn và các Delegator của bạn. 
![Explorer Image 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +Là cổng thông tin chính thức của bạn vào thế giới dữ liệu phi tập trung, Graph Explorer cho phép bạn thực hiện nhiều hành động khác nhau, bất kể vai trò của bạn trong mạng. Bạn có thể truy cập cài đặt hồ sơ của mình bằng cách mở menu thả xuống bên cạnh địa chỉ của bạn, sau đó nhấp vào nút Cài đặt.
![Wallet details](/img/Wallet-Details.png)
diff --git a/pages/vi/indexing.mdx b/pages/vi/indexing.mdx index 090b1be2b226..b543436f0049 100644 --- a/pages/vi/indexing.mdx +++ b/pages/vi/indexing.mdx @@ -4,47 +4,47 @@ title: Indexer import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +Indexer là những người vận hành node (node operator) trong Mạng The Graph có stake Graph Token (GRT) để cung cấp các dịch vụ indexing và xử lý truy vấn. Indexers kiếm được phí truy vấn và phần thưởng indexing cho các dịch vụ của họ. Họ cũng kiếm được tiền từ Rebate Pool (Pool Hoàn phí) được chia sẻ với tất cả những người đóng góp trong mạng tỷ lệ thuận với công việc của họ, tuân theo Chức năng Rebate Cobbs-Douglas. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +GRT được stake trong giao thức sẽ phải trải qua một khoảng thời gian chờ "tan băng" (thawing period) và có thể bị cắt nếu Indexer có ác ý và cung cấp dữ liệu không chính xác cho các ứng dụng hoặc nếu họ index không chính xác. Indexer cũng có thể được ủy quyền stake từ Delegator, để đóng góp vào mạng. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexer chọn các subgraph để index dựa trên tín hiệu curation của subgraph, trong đó Curator stake GRT để chỉ ra subgraph nào có chất lượng cao và cần được ưu tiên. Bên tiêu dùng (ví dụ: ứng dụng) cũng có thể đặt các tham số (parameter) mà Indexer xử lý các truy vấn cho các subgraph của họ và đặt các tùy chọn cho việc định giá phí truy vấn. -## FAQ +## CÂU HỎI THƯỜNG GẶP -### What is the minimum stake required to be an indexer on the network? +### Lượng stake tối thiểu cần thiết để trở thành một indexer trên mạng là bao nhiêu? -The minimum stake for an indexer is currently set to 100K GRT. +Lượng stake tối thiểu cho một indexer hiện được đặt là 100K GRT. -### What are the revenue streams for an indexer? +### Các nguồn doanh thu cho indexer là gì? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Hoàn phí truy vấn** - Thanh toán cho việc phục vụ các truy vấn trên mạng. Các khoản thanh toán này được dàn xếp thông qua các state channel giữa indexer và cổng. Mỗi yêu cầu truy vấn từ một cổng chứa một khoản thanh toán và phản hồi tương ứng là bằng chứng về tính hợp lệ của kết quả truy vấn. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. 
+**Phần thưởng Indexing** - Được tạo ra thông qua lạm phát trên toàn giao thức hàng năm 3%, phần thưởng indexing được phân phối cho các indexer đang lập chỉ mục các triển khai subgraph cho mạng lưới. -### How are rewards distributed? +### Phần thưởng được phân phối như thế nào? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Phần thưởng Indexing đến từ lạm phát giao thức được đặt thành 3% phát hành hàng năm. Chúng được phân phối trên các subgraph dựa trên tỷ lệ của tất cả các tín hiệu curation trên mỗi subgraph, sau đó được phân phối theo tỷ lệ cho các indexers dựa trên số stake được phân bổ của họ trên subgraph đó. **Việc phân bổ phải được kết thúc với bằng chứng lập chỉ mục (proof of indexing - POI) hợp lệ đáp ứng các tiêu chuẩn do điều lệ trọng tài đặt ra để đủ điều kiện nhận phần thưởng** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). +Nhiều công cụ đã được cộng đồng tạo ra để tính toán phần thưởng; bạn sẽ tìm thấy một bộ sưu tập của chúng được sắp xếp trong [Bộ sưu tập Hướng dẫn cộng đồng](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Bạn cũng có thể tìm thấy danh sách cập nhật mới nhất các công cụ trong các kênh #delegators và #indexers trên [server Discord](https://discord.gg/vtvv7FP). -### What is a proof of indexing (POI)? +### Bằng chứng lập chỉ mục (proof of indexing - POI) là gì? -POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POI được sử dụng trong mạng để xác minh rằng một indexer đang lập chỉ mục các subgraph mà họ đã phân bổ. POI cho khối đầu tiên của epoch hiện tại phải được gửi khi kết thúc phân bổ cho phân bổ đó để đủ điều kiện nhận phần thưởng indexing. POI cho một khối là một thông báo cho tất cả các giao dịch lưu trữ thực thể để triển khai một subgraph cụ thể lên đến và bao gồm khối đó. -### When are indexing rewards distributed? +### Khi nào Phần thưởng indexing được phân phối? -Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +Việc phân bổ liên tục tích lũy phần thưởng khi chúng đang hoạt động. Phần thưởng được thu thập bởi các indexer và phân phối bất cứ khi nào việc phân bổ của họ bị đóng lại. 
Điều đó xảy ra theo cách thủ công, bất cứ khi nào indexer muốn buộc đóng chúng hoặc sau 28 epoch, delegator có thể đóng phân bổ cho indexer, nhưng điều này dẫn đến không có phần thưởng nào được tạo ra. 28 epoch là thời gian tồn tại của phân bổ tối đa (hiện tại, một epoch kéo dài trong ~ 24 giờ). -### Can pending indexer rewards be monitored? +### Có thể giám sát phần thưởng indexer đang chờ xử lý không? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. +Hợp đồng RewardsManager có có một chức năng [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) chỉ đọc có thể được sử dụng để kiểm tra phần thưởng đang chờ để phân bổ cụ thể. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Nhiều trang tổng quan (dashboard) do cộng đồng tạo bao gồm các giá trị phần thưởng đang chờ xử lý và bạn có thể dễ dàng kiểm tra chúng theo cách thủ công bằng cách làm theo các bước sau: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Truy vấn [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) để nhận ID cho tất cả phần phân bổ đang hoạt động: ```graphql query indexerAllocations { @@ -60,135 +60,319 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: - -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) - -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. - -### What are disputes and where can I view them? - -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. - -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. - -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. - -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. - -### What are query fee rebates and when are they distributed? - -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. 
The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. - -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. - -### What is query fee cut and indexing reward cut? - -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. - -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. - -- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. - -### How do indexers know which subgraphs to index? - -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: - -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. - -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. - -- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. - -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. - -### What are the hardware requirements? - -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. - -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | - -### What are some basic security precautions an indexer should take? - -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. - -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. - -## Infrastructure - -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. - -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. - -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. - -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. - -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. - -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. - -### Ports overview - -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. 
+Sử dụng Etherscan để gọi `getRewards()`: + +- Điều hướng đến [giao diện Etherscan đến hợp đồng Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) + +* Để gọi `getRewards()`: + - Mở rộng **10. getRewards** thả xuống. + - Nhập **allocationID** trong đầu vào. + - Nhấn **Nút** Truy vấn. + +### Tranh chấp là gì và tôi có thể xem chúng ở đâu? + +Các truy vấn và phần phân bổ của Indexer đều có thể bị tranh chấp trên The Graph trong thời gian tranh chấp. Thời hạn tranh chấp khác nhau, tùy thuộc vào loại tranh chấp. Truy vấn / chứng thực có cửa sổ tranh chấp 7 epoch (kỷ nguyên), trong khi phần phân bổ có 56 epoch. Sau khi các giai đoạn này trôi qua, không thể mở các tranh chấp đối với phần phân bổ hoặc truy vấn. Khi một tranh chấp được mở ra, các Fisherman yêu cầu một khoản stake tối thiểu là 10.000 GRT, sẽ bị khóa cho đến khi tranh chấp được hoàn tất và giải pháp đã được đưa ra. Fisherman là bất kỳ người tham gia mạng nào mà đã mở ra tranh chấp. + +Tranh chấp có **ba** kết quả có thể xảy ra, phần tiền gửi của Fisherman cũng vậy. + +- Nếu tranh chấp bị từ chối, GRT do Fisherman gửi sẽ bị đốt, và Indexer tranh chấp sẽ không bị phạt cắt giảm (slashed). +- Nếu tranh chấp được giải quyết dưới dạng hòa, tiền gửi của Fisherman sẽ được trả lại, và Indexer bị tranh chấp sẽ không bị phạt cắt giảm (slashed). +- Nếu tranh chấp được chấp nhận, lượng GRT do Fisherman đã gửi sẽ được trả lại, Indexer bị tranh chấp sẽ bị cắt và Fisherman sẽ kiếm được 50% GRT đã bị phạt cắt giảm (slashed). + +Tranh chấp có thể được xem trong giao diện người dùng trong trang hồ sơ của Indexer trong mục `Tranh chấp`. + +### Các khoản hoàn phí truy vấn là gì và chúng được phân phối khi nào? + +Phí truy vấn được cổng thu thập bất cứ khi nào một phần phân bổ được đóng và được tích lũy trong pool hoàn phí truy vấn của subgraph. Pool hoàn phí được thiết kế để khuyến khích Indexer phân bổ stake theo tỷ lệ thô với số phí truy vấn mà họ kiếm được cho mạng. Phần phí truy vấn trong pool được phân bổ cho một indexer cụ thể được tính bằng cách sử dụng Hàm Sản xuất Cobbs-Douglas; số tiền được phân phối cho mỗi indexer là một chức năng của phần đóng góp của họ cho pool và việc phân bổ stake của họ trên subgraph. + +Khi một phần phân bổ đã được đóng và thời gian tranh chấp đã qua, indexer sẽ có thể nhận các khoản hoàn phí. Khi yêu cầu, các khoản hoàn phí truy vấn được phân phối cho indexer và delegator của họ dựa trên mức cắt giảm phí truy vấn và tỷ lệ pool ủy quyền (delegation). + +### Cắt giảm phí truy vấn và cắt giảm phần thưởng indexing là gì? + +Giá trị `queryFeeCut` và `indexingRewardCut` là các tham số delegation mà Indexer có thể đặt cùng với cooldownBlocks để kiểm soát việc phân phối GRT giữa indexer và delegator của họ. Xem các bước cuối cùng trong [Staking trong Giao thức](/indexing#stake-in-the-protocol) để được hướng dẫn về cách thiết lập các tham số delegation. + +- **queryFeeCut** - % hoàn phí truy vấn được tích lũy trên một subgraph sẽ được phân phối cho indexer. Nếu thông số này được đặt là 95%, indexer sẽ nhận được 95% của pool hoàn phí truy vấn khi một phần phân bổ được yêu cầu với 5% còn lại sẽ được chuyển cho delegator. + +- **indexingRewardCut** - % phần thưởng indexing được tích lũy trên một subgraph sẽ được phân phối cho indexer. Nếu thông số này được đặt là 95%, indexer sẽ nhận được 95% của pool phần thưởng indexing khi một phần phân bổ được đóng và các delegator sẽ chia 5% còn lại. + +### Làm thế nào để indexer biết những subgraph nào cần index? 
+ +Indexer có thể tự phân biệt bản thân bằng cách áp dụng các kỹ thuật nâng cao để đưa ra quyết định index subgraph nhưng để đưa ra ý tưởng chung, chúng ta sẽ thảo luận một số số liệu chính được sử dụng để đánh giá các subgraph trong mạng: + +- **Tín hiệu curation** - Tỷ lệ tín hiệu curation mạng được áp dụng cho một subgraph cụ thể là một chỉ báo tốt về mức độ quan tâm đến subgraph đó, đặc biệt là trong giai đoạn khởi động khi khối lượng truy vấn đang tăng lên. + +- **Phí truy vấn đã thu** - Dữ liệu lịch sử về khối lượng phí truy vấn được thu thập cho một subgraph cụ thể là một chỉ báo tốt về nhu cầu trong tương lai. + +- **Số tiền được stake** - Việc theo dõi hành vi của những indexer khác hoặc xem xét tỷ lệ tổng stake được phân bổ cho subgraph cụ thể có thể cho phép indexer giám sát phía nguồn cung cho các truy vấn subgraph để xác định các subgraph mà mạng đang thể hiện sự tin cậy hoặc các subgraph có thể cho thấy nhu cầu nguồn cung nhiều hơn. + +- **Subgraph không có phần thưởng indexing** - Một số subgraph không tạo ra phần thưởng indexing chủ yếu vì chúng đang sử dụng các tính năng không được hỗ trợ như IPFS hoặc vì chúng đang truy vấn một mạng khác bên ngoài mainnet. Bạn sẽ thấy một thông báo trên một subgraph nếu nó không tạo ra phần thưởng indexing. + +### Có các yêu cầu gì về phần cứng (hardware)? + +
    +
+- **Nhỏ** - Đủ để bắt đầu index một số subgraph, có thể sẽ cần được mở rộng.
+- **Tiêu chuẩn** - Thiết lập mặc định, đây là những gì được sử dụng trong bản kê khai (manifest) triển khai mẫu k8s/terraform.
+- **Trung bình** - Công cụ indexing production hỗ trợ 100 subgraph và 200-500 yêu cầu mỗi giây.
+- **Lớn** - Được chuẩn bị để index tất cả các subgraph hiện đang được sử dụng và phục vụ các yêu cầu cho lưu lượng truy cập liên quan.
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Thiết lập | Postgres<br />(CPUs) | Postgres<br />(bộ nhớ tính bằng GB) | Postgres<br />(đĩa tính bằng TBs) | VMs<br />(CPUs) | VMs<br />(bộ nhớ tính bằng GB) |
+| ---------- | -------------------- | ----------------------------------- | --------------------------------- | --------------- | ------------------------------ |
+| Nhỏ | 4 | 8 | 1 | 4 | 16 |
+| Tiêu chuẩn | 8 | 30 | 1 | 12 | 48 |
+| Trung bình | 16 | 64 | 2 | 32 | 64 |
+| Lớn | 72 | 468 | 3,5 | 48 | 184 |
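A quick way to compare an existing machine against the sizing table above is to read its CPU, memory, and disk figures directly from the host. The commands below are standard Linux utilities; the mount point is only an assumption and should be adjusted to wherever the Postgres data directory actually lives.

```sh
# Sketch: read the host's resources and compare them with the sizing table above.
nproc                          # CPUs
free -g                        # memory in GB
df -h /var/lib/postgresql      # disk on the Postgres volume (path is an assumption)
```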
+
+ +### Một số biện pháp phòng ngừa bảo mật cơ bản mà indexer nên thực hiện là gì? + +- **Ví Operator** - Thiết lập ví của operator là một biện pháp phòng ngừa quan trọng vì nó cho phép indexer duy trì sự tách biệt giữa các khóa kiểm soát stake của họ và những khóa kiểm soát hoạt động hàng ngày. Xem [Stake trong Giao thức](/indexing#stake-in-the-protocol) để được hướng dẫn. + +- **Tường lửa** - Chỉ dịch vụ indexer cần được hiển thị công khai và cần đặc biệt chú ý đến việc khóa các cổng quản trị và quyền truy cập cơ sở dữ liệu: điểm cuối The Graph Node JSON-RPC (cổng mặc định: 8030), điểm cuối API quản lý indexer (cổng mặc định: 18000), và điểm cuối cơ sở dữ liệu Postgres (cổng mặc định: 5432) không được để lộ. + +## Cơ sở hạ tầng + +Tại trung tâm của cơ sở hạ tầng của indexer là Graph Node theo dõi Ethereum, trích xuất và tải dữ liệu theo định nghĩa subgraph và phục vụ nó như một [GraphQL API](/about/introduction#how-the-graph-works). Graph Node cần được kết nối với điểm cuối node Ethereum EVM và node IPFS để tìm nguồn cung cấp dữ liệu; một cơ sở dữ liệu PostgreSQL cho kho lưu trữ của nó; và các thành phần indexer tạo điều kiện cho các tương tác của nó với mạng. + +- **Cơ sở dữ liệu PostgreSQLPostgreSQL** - Kho lưu trữ chính cho Graph Node, đây là nơi lưu trữ dữ liệu subgraph. Dịch vụ indexer và đại lý cũng sử dụng cơ sở dữ liệu để lưu trữ dữ liệu kênh trạng thái (state channel), mô hình chi phí và quy tắc indexing. + +- **Điểm cuối Ethereum** - Một điểm cuối cho thấy API Ethereum JSON-RPC. Điều này có thể ở dạng một ứng dụng khách Ethereum duy nhất hoặc nó có thể là một thiết lập phức tạp hơn để tải số dư trên nhiều máy khách. Điều quan trọng cần lưu ý là các subgraph nhất định sẽ yêu cầu các khả năng cụ thể của ứng dụng khách Ethereum như chế độ lưu trữ và API truy tìm. + +- **IPFS node (phiên bản nhỏ hơn 5)** - Siêu dữ liệu triển khai subgraph được lưu trữ trên mạng IPFS. Node The Graph chủ yếu truy cập vào node IPFS trong quá trình triển khai subgraph để tìm nạp tệp kê khai (manifest) subgraph và tất cả các tệp được liên kết. Indexers mạng lưới không cần lưu trữ node IPFS của riêng họ, một node IPFS cho mạng lưới được lưu trữ tại https://ipfs.network.thegraph.com. + +- **Dịch vụ Indexer** - Xử lý tất cả các giao tiếp bên ngoài được yêu cầu với mạng. Chia sẻ các mô hình chi phí và trạng thái indexing, chuyển các yêu cầu truy vấn từ các cổng đến Node The Graph và quản lý các khoản thanh toán truy vấn qua các kênh trạng thái với cổng. + +- **Đại lý Indexer ** - Tạo điều kiện thuận lợi cho các tương tác của Indexer trên blockchain bao gồm những việc như đăng ký trên mạng lưới, quản lý triển khai subgraph đối với Node The Graph của nó và quản lý phân bổ. Máy chủ số liệu Prometheus - Các thành phần Node The Graph và Indexer ghi các số liệu của chúng vào máy chủ số liệu. + +Lưu ý: Để hỗ trợ mở rộng quy mô nhanh, bạn nên tách các mối quan tâm về truy vấn và indexing giữa các nhóm node khác nhau: node truy vấn và node index. + +### Tổng quan về các cổng + +> **Quan trọng**: Hãy cẩn thận về việc để lộ các cổng 1 cách công khai - **cổng quản lý** nên được giữ kín. Điều này bao gồm JSON-RPC Node The Graph và các điểm cuối quản lý indexer được trình bày chi tiết bên dưới. #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | - -#### Indexer Service - -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | - -#### Indexer Agent - -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | - -### Setup server infrastructure using Terraform on Google Cloud - -#### Install prerequisites +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Cổng | Mục đích | Tuyến | Đối số CLI | Biến Môi trường |
+| ---- | -------- | ----- | ---------- | --------------- |
+| 8000 | GraphQL HTTP server<br />(dành cho các truy vấn subgraph) | /subgraphs/id/...<br />/subgraphs/name/.../... | --http-port | - |
+| 8001 | GraphQL WS<br />(Dành cho đăng ký subgraph) | /subgraphs/id/...<br />/subgraphs/name/.../... | --ws-port | - |
+| 8020 | JSON-RPC<br />(để quản lý triển khai) | / | --admin-port | - |
+| 8030 | API trạng thái indexing subgraph | /graphql | --index-node-port | - |
+| 8040 | Số liệu Prometheus | /metrics | --metrics-port | - |
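As a quick sanity check of the port layout above, the indexing status API on port 8030 can be queried directly. This sketch assumes a local Graph Node with default ports; the exact fields exposed by the status endpoint can differ between graph-node versions.

```sh
# Sketch: confirm Graph Node answers on the indexing status port (8030).
curl -s http://localhost:8030/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph synced health } }"}'
```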
+
+ +#### Dịch vụ Indexer + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Cổng | Mục đích | Tuyến | Đối số CLI | Biến Môi trường |
+| ---- | -------- | ----- | ---------- | --------------- |
+| 7600 | GraphQL HTTP server<br />(Dành cho các truy vấn subgraph có trả phí) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
+| 7300 | Số liệu Prometheus | /metrics | --metrics-port | - |
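The Prometheus endpoint in the table above also doubles as a simple liveness check for the indexer service. Assuming the default host and port:

```sh
# Sketch: verify the indexer service is up by reading a few metrics lines.
curl -s http://localhost:7300/metrics | head -n 20
```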
+
+ +#### Đại lý Indexer + +
+ + + + + + + + + + + + + + + + + + + +
+| Cổng | Mục đích | Tuyến | Đối số CLI | Biến Môi trường |
+| ---- | -------- | ----- | ---------- | --------------- |
+| 8000 | API quản lý Indexer | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
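Tying the port tables above back to the firewall note earlier on this page (only the indexer service should be reachable publicly), one possible host-level rule set looks like the following. ufw is used purely as an illustration; the port numbers come from the note and tables above, and the SSH rule is an assumption about how the host is administered.

```sh
# Sketch: expose only the indexer service; keep admin and database ports closed.
ufw default deny incoming
ufw allow 22/tcp      # SSH for administration (assumption)
ufw allow 7600/tcp    # indexer service (public query traffic)
ufw deny 8030/tcp     # Graph Node JSON-RPC / status endpoint
ufw deny 18000/tcp    # indexer management API
ufw deny 5432/tcp     # Postgres
ufw enable
```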
+
+ +### Thiết lập cơ sở hạ tầng máy chủ bằng Terraform trên Google Cloud + +#### Cài đặt điều kiện tiên quyết - Google Cloud SDK -- Kubectl command line tool +- Công cụ dòng lệnh Kubectl - Terraform -#### Create a Google Cloud Project +#### Tạo một dự án Google Cloud -- Clone or navigate to the indexer repository. +- Sao chép hoặc điều hướng đến kho lưu trữ (repository) của indexer. -- Navigate to the ./terraform directory, this is where all commands should be executed. +- Điều hướng đến thư mục ./terraform, đây là nơi tất cả các lệnh sẽ được thực thi. ```sh -cd terraform +cd địa hình ``` -- Authenticate with Google Cloud and create a new project. +- Xác thực với Google Cloud và tạo một dự án mới. ```sh gcloud auth login @@ -196,9 +380,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Sử dụng \[billing page\](billing page) của Google Cloud Consolde để cho phép thanh toán cho dự án mới. -- Create a Google Cloud configuration. +- Tạo một cấu hình Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +392,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Bật các API Google Cloud được yêu cầu. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +401,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Tạo một tài khoản dịch vụ. ```sh svc_name= @@ -235,7 +419,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Bật tính năng ngang hàng (peering) giữa cơ sở dữ liệu và cụm Kubernetes sẽ được tạo trong bước tiếp theo. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +433,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Tạo tệp cấu hình terraform tối thiểu (cập nhật nếu cần). ```sh indexer= @@ -260,24 +444,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Sử dụng Terraform để tạo cơ sở hạ tầng -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Trước khi chạy bất kỳ lệnh nào, hãy đọc qua [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) và tạo một tệp `terraform.tfvars` trong thư mục này (hoặc sửa đổi thư mục chúng ta đã tạo ở bước vừa rồi). Đối với mỗi biến mà bạn muốn ghi đè mặc định hoặc nơi bạn cần đặt giá trị, hãy nhập cài đặt vào `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Chạy các lệnh sau để tạo cơ sở hạ tầng. 
```sh -# Install required plugins +# Cài đặt các Plugins được yêu cầu terraform init -# View plan for resources to be created +# Xem kế hoạch cho các tài nguyên sẽ được tạo terraform plan -# Create the resources (expect it to take up to 30 minutes) +# Tạo tài nguyên (dự kiến mất đến 30 phút) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Tải xuống thông tin đăng nhập cho cụm mới vào `~/.kube/config` và đặt nó làm ngữ cảnh mặc định của bạn. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +469,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### Tạo các thành phần Kubernetes cho indexer -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- Sao chép thư mục `k8s/overlays` đến một thư mục mới `$dir,` và điều chỉnh `bases` vào trong `$dir/kustomization.yaml` để nó chỉ đến thư mục `k8s/base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- Đọc qua tất cả các tệp trong `$dir` và điều chỉnh bất kỳ giá trị nào như được chỉ ra trong nhận xét. -Deploy all resources with `kubectl apply -k $dir`. +Triển khai tất cả các tài nguyên với `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) là một triển khai Rust mã nguồn mở mà sự kiện tạo nguồn cho blockchain Ethereum để cập nhật một cách xác định kho dữ liệu có thể được truy vấn thông qua điểm cuối GraphQL. Các nhà phát triển sử dụng các subgraph để xác định subgraph của họ và một tập hợp các ánh xạ để chuyển đổi dữ liệu có nguồn gốc từ blockchain và Graph Node xử lý việc đồng bộ hóa toàn bộ chain, giám sát các khối mới và phân phát nó qua một điểm cuối GraphQL. -#### Getting started from source +#### Bắt đầu từ nguồn -#### Install prerequisites +#### Cài đặt điều kiện tiên quyết - **Rust** @@ -307,15 +491,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Yêu cầu bổ sung cho người dùng Ubuntu** - Để chạy Graph Node trên Ubuntu, có thể cần một số gói bổ sung. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### Cài đặt -1. Start a PostgreSQL database server +1. Khởi động máy chủ cơ sở dữ liệu PostgreSQL ```sh initdb -D .postgres @@ -323,9 +507,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Nhân bản [Graph Node](https://github.com/graphprotocol/graph-node) repo và xây dựng nguồn bằng cách chạy `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. 
Bây giờ tất cả các phụ thuộc đã được thiết lập, hãy khởi động Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +518,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Bắt đầu sử dụng Docker -#### Prerequisites +#### Điều kiện tiên quyết -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum node** - Theo mặc định, thiết lập soạn thư docker sẽ sử dụng mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) để kết nối với node Ethereum trên máy chủ của bạn. Bạn có thể thay thế tên và url mạng này bằng cách cập nhật `docker-compose.yaml`. -#### Setup +#### Cài đặt -1. Clone Graph Node and navigate to the Docker directory: +1. Nhân bản Graph Node và điều hướng đến thư mục Docker: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. Chỉ dành cho người dùng linux - Sử dụng địa chỉ IP máy chủ thay vì `host.docker.internal` trong `docker-compose.yaml` bằng cách sử dụng tập lệnh bao gồm: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Bắt đầu một Graph Node cục bộ sẽ kết nối với điểm cuối Ethereum của bạn: ```sh docker-compose up ``` -### Indexer components +### Các thành phần của Indexer -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: +Để tham gia thành công vào mạng này, đòi hỏi sự giám sát và tương tác gần như liên tục, vì vậy chúng tôi đã xây dựng một bộ ứng dụng Typescript để tạo điều kiện cho Indexer tham gia mạng. Có ba thành phần của trình indexer: -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- **Đại ly Indexer** - Đại lý giám sát mạng và cơ sở hạ tầng của chính Indexer và quản lý việc triển khai subgraph nào được lập chỉ mục và phân bổ trên chain và số lượng được phân bổ cho mỗi. -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Dịch vụ Indexer** - Thành phần duy nhất cần được hiển thị bên ngoài, dịch vụ chuyển các truy vấn subgraph đến graph node, quản lý các kênh trạng thái cho các khoản thanh toán truy vấn, chia sẻ thông tin ra quyết định quan trọng cho máy khách như các cổng. -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- **Indexer CLI** - Giao diện dòng lệnh để quản lý đại lý indexer. Nó cho phép indexer quản lý các mô hình chi phí và các quy tắc lập chỉ mục. -#### Getting started +#### Bắt đầu -The indexer agent and indexer service should be co-located with your Graph Node infrastructure. 
There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +Đại lý indexer và dịch vụ indexer nên được đặt cùng vị trí với cơ sở hạ tầng Graph Node của bạn. Có nhiều cách để thiết lập môi trường thực thi ảo cho bạn các thành phần của indexer; ở đây chúng tôi sẽ giải thích cách chạy chúng trên baremetal bằng cách sử dụng gói hoặc nguồn NPM hoặc thông qua kubernetes và docker trên Google Cloud Kubernetes Engine. Nếu các ví dụ thiết lập này không được dịch tốt sang cơ sở hạ tầng của bạn, có thể sẽ có một hướng dẫn cộng đồng để tham khảo, hãy tìm hiểu thêm tại [Discord](https://thegraph.com/discord)! Hãy nhớ [stake trong giao thứcl](/indexing#stake-in-the-protocol) trước khi bắt đầu các thành phần indexer của bạn! -#### From NPM packages +#### Từ các gói NPM ```sh npm install -g @graphprotocol/indexer-service @@ -398,17 +582,17 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### Từ nguồn ```sh -# From Repo root directory +# Từ Repo root directory yarn -# Indexer Service +# Dịch vụ Indexer cd packages/indexer-service ./bin/graph-indexer-service start ... -# Indexer agent +# Đại lý Indexer cd packages/indexer-agent ./bin/graph-indexer-service start ... @@ -418,48 +602,48 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Using docker +#### Sử dụng docker -- Pull images from the registry +- Kéo hình ảnh từ sổ đăng ký ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +Hoặc xây dựng hình ảnh cục bộ từ nguồn ```sh -# Indexer service +# Dịch vụ Indexer docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Indexer agent +# Đại lý Indexer docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Run the components +- Chạy các thành phần ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). +**LƯU Ý**: Sau khi khởi động vùng chứa, dịch vụ indexer sẽ có thể truy cập được tại [http://localhost:7600](http://localhost:7600) và đại lý indexer sẽ hiển thị API quản lý indexer tại [http://localhost:18000/](http://localhost:18000/). 
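As a follow-up to the note above, a quick way to confirm the containers are serving is to list them and point the Indexer CLI at the management API on the mapped port. This assumes the CLI from the NPM packages section is installed on the same host.

```sh
# Sketch: check the running containers, then connect the Indexer CLI
# to the management API exposed on port 18000.
docker ps
graph indexer connect http://localhost:18000/
graph indexer status
```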
-#### Using K8s and Terraform +#### Sử dụng K8s and Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section +Xem phần [Thiết lập Cơ sở Hạ tầng Máy chủ bằng Terraform trên Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### Sử dụng -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **LƯU Ý**: Tất cả các biến cấu hình thời gian chạy có thể được áp dụng dưới dạng tham số cho lệnh khi khởi động hoặc sử dụng các biến môi trường của định dạng `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### Indexer agent +#### Đại lý Indexer ```sh graph-indexer-agent start \ @@ -487,7 +671,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Dịch vụ Indexer ```sh SERVER_HOST=localhost \ @@ -515,42 +699,42 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI là một plugin dành cho [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) có thể truy cập trong terminal tại `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### Quản lý Indexer bằng cách sử dụng indexer CLI -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. +Đại lý indexer cần đầu vào từ một indexer để tự động tương tác với mạng thay mặt cho indexer. Cơ chế để xác định hành vi của đại lý indexer là **các quy tắc indexing**. Sử dụng **các quy tắc indexing** một indexer có thể áp dụng chiến lược cụ thể của họ để chọn các subgraph để lập chỉ mục và phục vụ các truy vấn. Các quy tắc được quản lý thông qua API GraphQL do đại lý phân phối và được gọi là API Quản lý Indexer. Công cụ được đề xuất để tương tác với **API Quản lý Indexer** là **Indexer CLI**, một extension cho **Graph CLI**. -#### Usage +#### Sử dụng -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**Indexer CLI** kết nối với đại lý indexer, thường thông qua chuyển tiếp cổng (port-forwarding), vì vậy CLI không cần phải chạy trên cùng một máy chủ hoặc cụm. Để giúp bạn bắt đầu và cung cấp một số ngữ cảnh, CLI sẽ được mô tả ngắn gọn ở đây. -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Kết nối với API quản lý indexer. 
Thông thường, kết nối với máy chủ được mở thông qua chuyển tiếp cổng, vì vậy CLI có thể dễ dàng vận hành từ xa. (Ví dụ: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. +- `graph indexer rules get [options] ...]` - Lấy một hoặc nhiều quy tắc indexing bằng cách sử dụng `all` như là `` để lấy tất cả các quy tắc, hoặc `global` để lấy các giá trị mặc định chung. Một đối số bổ sung`--merged` có thể được sử dụng để chỉ định rằng các quy tắc triển khai cụ thể được hợp nhất với quy tắc chung. Đây là cách chúng được áp dụng trong đại lý indexer. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - Đặt một hoặc nhiều quy tắc indexing. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Bắt đầu indexing triển khai subgraph nếu có và đặt `decisionBasis` thành `always`, để đại lý indexer sẽ luôn chọn lập chỉ mục nó. Nếu quy tắc chung được đặt thành luôn thì tất cả các subgraph có sẵn trên mạng sẽ được lập chỉ mục. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Ngừng indexing triển khai và đặt `decisionBasis` không bao giờ, vì vậy nó sẽ bỏ qua triển khai này khi quyết định triển khai để lập chỉ mục. -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — Đặt `thedecisionBasis` cho một triển khai thành `rules`, để đại lý indexer sẽ sử dụng các quy tắc indexing để quyết định có index việc triển khai này hay không. -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +Tất cả các lệnh hiển thị quy tắc trong đầu ra có thể chọn giữa các định dạng đầu ra được hỗ trợ (`table`, `yaml`, and `json`) bằng việc sử dụng đối số `-output`. -#### Indexing rules +#### Các quy tắc indexing -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Các quy tắc Indexing có thể được áp dụng làm mặc định chung hoặc cho các triển khai subgraph cụ thể bằng cách sử dụng ID của chúng. Các trường `deployment` và `decisionBasis` là bắt buộc, trong khi tất cả các trường khác là tùy chọn. 
Khi quy tắc lập chỉ mục có `rules` như là `decisionBasis`, thì đại lý indexer sẽ so sánh các giá trị ngưỡng không null trên quy tắc đó với các giá trị được tìm nạp từ mạng để triển khai tương ứng. Nếu triển khai subgraph có các giá trị trên (hoặc thấp hơn) bất kỳ ngưỡng nào thì nó sẽ được chọn để index. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +Ví dụ: nếu quy tắc chung có `minStake` của **5** (GRT), bất kỳ triển khai subgraph nào có hơn 5 (GRT) stake được phân bổ cho nó sẽ được index. Các quy tắc ngưỡng bao gồm `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, và `minAverageQueryFees`. -Data model: +Mô hình dữ liệu: ```graphql type IndexingRule { @@ -573,98 +757,117 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### Các mô hình chi phí -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +Mô hình chi phí cung cấp định giá động cho các truy vấn dựa trên thuộc tính thị trường và truy vấn. Dịch vụ Indexer chia sẻ mô hình chi phí với các cổng cho mỗi subgraph mà chúng dự định phản hồi các truy vấn. Đến lượt mình, các cổng sử dụng mô hình chi phí để đưa ra quyết định lựa chọn indexer cho mỗi truy vấn và để thương lượng thanh toán với những indexer đã chọn. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Ngôn ngữ Agora cung cấp một định dạng linh hoạt để khai báo các mô hình chi phí cho các truy vấn. Mô hình giá Agora là một chuỗi các câu lệnh thực thi theo thứ tự cho mỗi truy vấn cấp cao nhất trong một truy vấn GraphQL. Đối với mỗi truy vấn cấp cao nhất, câu lệnh đầu tiên phù hợp với nó xác định giá cho truy vấn đó. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Một câu lệnh bao gồm một vị từ (predicate), được sử dụng để đối sánh các truy vấn GraphQL và một biểu thức chi phí mà khi được đánh giá sẽ xuất ra chi phí ở dạng GRT thập phân. Các giá trị ở vị trí đối số được đặt tên của một truy vấn có thể được ghi lại trong vị từ và được sử dụng trong biểu thức. Các Globals có thể được đặt và thay thế cho các phần giữ chỗ trong một biểu thức. 
-Example cost model: +Mô hình chi phí mẫu: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Câu lệnh này ghi lại giá trị bỏ qua (skip), +# sử dụng biểu thức boolean trong vị từ để khớp với các truy vấn cụ thể sử dụng `skip` +# và một biểu thức chi phí để tính toán chi phí dựa trên giá trị `skip` và SYSTEM_LOAD global query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Mặc định này sẽ khớp với bất kỳ biểu thức GraphQL nào. +# Nó sử dụng một Global được thay thế vào biểu thức để tính toán chi phí default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: - -| Query | Price | -| ---------------------------------------------------------------------------- | ------- | -| { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | -| { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | - -#### Applying the cost model - -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Ví dụ truy vấn chi phí bằng cách sử dụng mô hình trên: + +
+ + + + + + + + + + + + + + + + + + + + + +
+| Truy vấn | Giá |
+| -------- | ------- |
+| { pairs(skip: 5000) { id } } | 0.5 GRT |
+| { tokens { symbol } } | 0.1 GRT |
+| { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT |
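As a worked example of how the first two rows follow from the model above (treating `$SYSTEM_LOAD` as 1 for these illustrative prices):

```latex
\text{pairs}(skip{:}\,5000):\quad 0.0001 \times 5000 \times 1 = 0.5\ \text{GRT}
\qquad
\text{tokens (default rule)}:\quad 0.1 \times 1 = 0.1\ \text{GRT}
```

With the `SYSTEM_LOAD` of 1.4 set in the example further below, the same two queries would instead cost 0.7 GRT and 0.14 GRT respectively.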
+
+ +#### Áp dụng mô hình chi phí + +Các mô hình chi phí được áp dụng thông qua Indexer CLI, chuyển chúng đến API Quản lý Indexer của đại lý indexer để lưu trữ trong cơ sở dữ liệu. Sau đó, Dịch vụ Indexer sẽ nhận chúng và cung cấp các mô hình chi phí tới các cổng bất cứ khi nào họ yêu cầu. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Tương tác với mạng -### Stake in the protocol +### Stake trong giao thức -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +Các bước đầu tiên để tham gia vào mạng với tư cách là Indexer là phê duyệt giao thức, stake tiền và (tùy chọn) thiết lập địa chỉ operator cho các tương tác giao thức hàng ngày. _ **Lưu ý**: Đối với các mục đích của các hướng dẫn này, Remix sẽ được sử dụng để tương tác hợp đồng, nhưng hãy thoải mái sử dụng công cụ bạn chọn ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), và [MyCrypto](https://www.mycrypto.com/account) là một vài công cụ được biết đến khác)._ -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +Khi một indexer đã stake GRT vào giao thức, [các thành phần indexer](/indexing#indexer-components) có thể được khởi động và bắt đầu tương tác của chúng với mạng. -#### Approve tokens +#### Phê duyệt các token -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Mở [Remix app](https://remix.ethereum.org/) trong một trình duyệt -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. Trong `File Explorer` tạo một tệp tên **GraphToken.abi** với [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. +3. Với `GraphToken.abi` đã chọn và mở trong trình chỉnh sửa, chuyển sang Deploy (Triển khai) và `Run Transactions` trong giao diện Remix. -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. Trong môi trường (environment) chọn `Injected Web3` và trong `Account` chọn địa chỉ indexer của bạn. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Đặt địa chỉ hợp đồng GraphToken - Dán địa chỉ hợp đồng GraphToken(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) kế bên `At Address` và nhấp vào nút `At address` để áp dụng. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Gọi chức năng `approve(spender, amount)` để phê duyệt hợp đồng Staking. 
-#### Stake tokens
+#### Stake tokens

-1. Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Open the [Remix app](https://remix.ethereum.org/) in a browser

-2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI.
+2. In the `File Explorer` create a file named **Staking.abi** with the Staking ABI.

-3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface.
+3. With `Staking.abi` selected and open in the editor, switch to the `Deploy & Run Transactions` section in the Remix interface.

-4. Under environment select `Injected Web3` and under `Account` select your indexer address.
+4. Under environment select `Injected Web3` and under `Account` select your indexer address.

-5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply.
+5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply.

-6. Call `stake()` to stake GRT in the protocol.
+6. Call `stake()` to stake GRT in the protocol.

-7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure, in order to separate the keys that control the funds from those that perform day-to-day actions such as allocating on subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.

-8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract delegators, indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the indexer and 5% to delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the indexer and 40% to delegators, and sets the `cooldownBlocks` period to 500 blocks. A scripted version of steps 6 to 8 is sketched below, after the example call.

```
setDelegationParameters(950000, 600000, 500)
```
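The same three calls can be scripted rather than made through Remix. In the sketch below (ethers v6 assumed) the Staking address is the one quoted above and the `setDelegationParameters` values simply mirror the example call; the hand-written ABI fragments and the environment variable names are assumptions and should be checked against the published Staking ABI before use.

```ts
// Hypothetical script mirroring steps 6 to 8 above (ethers v6 assumed).
import { ethers } from "ethers";

const STAKING = "0xF55041E37E12cD407ad00CE2910B8269B01263b9";

// Hand-written ABI fragments; verify the exact signatures against the published Staking ABI.
const stakingAbi = [
  "function stake(uint256 tokens)",
  "function setOperator(address operator)",
  "function setDelegationParameters(uint32, uint32, uint32)",
];

async function stakeAndConfigure(amountGrt: string): Promise<void> {
  // ETH_RPC_URL, INDEXER_PRIVATE_KEY, and OPERATOR_ADDRESS are hypothetical environment variables.
  const provider = new ethers.JsonRpcProvider(process.env.ETH_RPC_URL);
  const indexer = new ethers.Wallet(process.env.INDEXER_PRIVATE_KEY!, provider);
  const staking = new ethers.Contract(STAKING, stakingAbi, indexer);

  // Step 6: stake GRT (requires the earlier approve() call to have succeeded).
  await (await staking.stake(ethers.parseEther(amountGrt))).wait();

  // Step 7 (optional): delegate day-to-day actions to a separate operator address.
  await (await staking.setOperator(process.env.OPERATOR_ADDRESS!)).wait();

  // Step 8 (optional): same values as the example call above (parts per million, then blocks);
  // confirm the parameter order in the Staking ABI before sending.
  await (await staking.setDelegationParameters(950000, 600000, 500)).wait();
}

stakeAndConfigure("100000").catch(console.error);
```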
-### The life of an allocation
+### The life of an allocation

-After being created by an indexer a healthy allocation goes through four states.
+After being created by an indexer, a healthy allocation goes through four states. The two transitions an indexer triggers itself, closing and claiming, are sketched after the list.

-- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules.
+- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows the indexer to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules.

-- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more).
+- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)), or their indexer agent will automatically close the allocation after **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI), the indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more).

-- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**.
+- **Finalized** - Once an allocation has been closed, there is a dispute period after which the allocation is considered **finalized** and its query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **--allocation-claim-threshold**.

-- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed.
+- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed, and its query fee rebates have been claimed.
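For completeness, here is a hedged sketch of the two indexer-initiated transitions named above, closing an allocation with a POI and claiming its rebates, again with ethers (v6 assumed). In practice the indexer agent performs both automatically; the ABI fragments and environment variable names below are assumptions, and the exact `closeAllocation`/`claim` signatures should be confirmed against the published Staking ABI.

```ts
// Hypothetical sketch of the indexer-initiated allocation transitions (ethers v6 assumed).
import { ethers, Contract } from "ethers";

const STAKING = "0xF55041E37E12cD407ad00CE2910B8269B01263b9";

// Assumed ABI fragments; confirm against the published Staking ABI before use.
const stakingAbi = [
  "function closeAllocation(address allocationID, bytes32 poi)",
  "function claim(address allocationID, bool restake)",
];

function stakingContract(): Contract {
  // ETH_RPC_URL and OPERATOR_PRIVATE_KEY are hypothetical environment variables.
  const provider = new ethers.JsonRpcProvider(process.env.ETH_RPC_URL);
  const operator = new ethers.Wallet(process.env.OPERATOR_PRIVATE_KEY!, provider);
  return new Contract(STAKING, stakingAbi, operator);
}

// Active -> Closed: requires a valid proof of indexing (POI) for rewards to be distributed.
export async function closeActiveAllocation(allocationID: string, poi: string): Promise<void> {
  await (await stakingContract().closeAllocation(allocationID, poi)).wait();
}

// Finalized -> Claimed: only callable once the dispute period has passed;
// restake = true keeps the query fee rebates staked instead of withdrawing them.
export async function claimFinalizedRebates(allocationID: string): Promise<void> {
  await (await stakingContract().claim(allocationID, true)).wait();
}
```

The functions are left uninvoked because the allocation ID and POI come from the indexer's own infrastructure (for example via the indexer CLI), not from values that can be hard-coded here.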