From ec3e2c9073ad2d5b1b864bdb9921e2896c68a125 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:07 -0500 Subject: [PATCH 001/241] New translations introduction.mdx (Spanish) --- pages/es/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/es/about/introduction.mdx b/pages/es/about/introduction.mdx index 70290d8c3649..5f840c040400 100644 --- a/pages/es/about/introduction.mdx +++ b/pages/es/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: Introducción +title: Introduction --- -En esta página se explica qué es The Graph y cómo puedes empezar a utilizarlo. +This page will explain what The Graph is and how you can get started. -## ¿Qué es The Graph? +## What The Graph Is -The Graph es un protocolo descentralizado que permite indexar y consultar los datos de diferentes blockchains, el cual empezó por Ethereum. Permite consultar datos los cuales pueden ser difíciles de consultar directamente. +The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. -Los proyectos con contratos inteligentes complejos como [Uniswap](https://uniswap.org/) y las iniciativas de NFTs como [Bored Ape Yacht Club](https://boredapeyachtclub.com/) almacenan los datos en la blockchain de Ethereum, lo que hace realmente difícil leer algo más que los datos básicos directamente desde la blockchain. +Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. -En el caso de Bored Ape Yacht Club, podemos realizar operaciones de lecturas básicas en [su contrato](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code), para obtener el propietario de un determinado Ape, obtener el URI de un Ape en base a su ID, o el supply total, ya que estas operaciones de lectura están programadas directamente en el contrato inteligente, pero no son posibles las consultas y operaciones más avanzadas del mundo real como la adición, consultas, las relaciones y el filtrado no trivial. Por ejemplo, si quisiéramos consultar los Apes que son propiedad de una dirección en concreto, y filtrar por una de sus características, no podríamos obtener esa información interactuando directamente con el contrato. +In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. 
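For contrast, once a subgraph has indexed this data, a question like "which apes does a given address own, filtered by one of their traits" becomes a single GraphQL query against the subgraph's API. The sketch below is illustrative only; the entity name, field names, and address are hypothetical and not taken from an actual BAYC subgraph:

```graphql
# Hypothetical entity and fields for illustration; a real subgraph defines its own schema
{
  tokens(where: { owner: "0x0000000000000000000000000000000000000000", trait: "Zombie Fur" }) {
    id
    tokenURI
  }
}
```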
-Para obtener estos datos, tendríamos que procesar cada uno de los eventos de [`transferencia`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) que se hayan emitido, leer los metadatos de IPFS utilizando el ID del token y el hash del IPFS, con el fin de luego agregarlos. Incluso para este tipo de preguntas relativamente sencillas, una aplicación descentralizada (dapp) que se ejecutara en un navegador tardaría **horas o incluso días** en obtener una respuesta. +To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. -También podrías construir tu propio servidor, procesar las transacciones allí, guardarlas en una base de datos y construir un endpoint de la API sobre todo ello para consultar los datos. Sin embargo, esta opción requiere recursos intensivos, necesita mantenimiento, y si llegase a presentar algún tipo de fallo podría incluso vulnerar algunos protocolos de seguridad que son necesarios para la descentralización. +You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. -**Indexar los datos de la blockchain es muy, muy difícil.** +**Indexing blockchain data is really, really hard.** -Las propiedades de la blockchain, su finalidad, la reorganización de la cadena o los bloques que están por cerrarse, complican aún más este proceso y hacen que no solo se consuma tiempo, sino que sea conceptualmente difícil recuperar los resultados correctos proporcionados por la blockchain. +Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. -The Graph resuelve esto con un protocolo descentralizado que indexa y permite una consulta eficiente y de alto rendimiento para recibir los datos de la blockchain. Estas APIs ("subgrafos" indexados) pueden consultarse después con una API de GraphQL estándar. Actualmente, existe un servicio alojado (hosted) y un protocolo descentralizado con las mismas capacidades. Ambos están respaldados por la implementación de código abierto de [Graph Node](https://github.com/graphprotocol/graph-node). +The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). -## ¿Cómo funciona The Graph? +## How The Graph Works -The Graph aprende, qué y cómo indexar los datos de Ethereum, basándose en las descripciones de los subgrafos, conocidas como el manifiesto de los subgrafos. 
La descripción del subgrafo define los contratos inteligentes de interés para este subgrafo, los eventos en esos contratos a los que prestar atención, y cómo mapear los datos de los eventos a los datos que The Graph almacenará en su base de datos. +The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. -Una vez que has escrito el `subgraph manifest`, utilizas el CLI de The Graph para almacenar la definición en IPFS y decirle al indexador que empiece a indexar los datos de ese subgrafo. +Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. -Este diagrama ofrece más detalles sobre el flujo de datos una vez que se ha desplegado en el manifiesto para un subgrafo, que trata de las transacciones en Ethereum: +This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: ![](/img/graph-dataflow.png) -El flujo sigue estos pasos: +The flow follows these steps: -1. Una aplicación descentralizada añade datos a Ethereum a través de una transacción en un contrato inteligente. -2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. -3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de su subgrafo que puedan contener. -4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. -5. La aplicación descentralizada consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La aplicación descentralizada muestra estos datos en una interfaz muy completa para el usuario, a fin de que los cliente que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. Y así... el ciclo se repite continuamente. +1. A decentralized application adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. 
The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. -## Próximos puntos +## Next Steps -En las siguientes secciones entraremos en más detalles sobre cómo definir subgrafos, cómo desplegarlos y cómo consultar los datos de los índices que construye el Graph Node. +In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. -Antes de que empieces a escribir tu propio subgrafo, es posible que debas echar un vistazo a The Graph Explorer para explorar algunos de los subgrafos que ya han sido desplegados. La página de cada subgrafo contiene un playground que te permite consultar los datos de ese subgrafo usando GraphQL. +Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. From 26e6d71877fa02ac4e4ab6b078186c472db3e12e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:08 -0500 Subject: [PATCH 002/241] New translations deprecating-a-subgraph.mdx (Arabic) --- pages/ar/developer/deprecating-a-subgraph.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/deprecating-a-subgraph.mdx b/pages/ar/developer/deprecating-a-subgraph.mdx index 2d83064709da..f8966e025c13 100644 --- a/pages/ar/developer/deprecating-a-subgraph.mdx +++ b/pages/ar/developer/deprecating-a-subgraph.mdx @@ -1,17 +1,17 @@ --- -title: إهمال Subgraph +title: Deprecating a Subgraph --- -إن كنت ترغب في إهمال الـ subgraph الخاص بك في The Graph Explorer. فأنت في المكان المناسب! اتبع الخطوات أدناه: +So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: -1. قم بزيارة عنوان العقد [ هنا ](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) -2. استدعِ "devecateSubgraph" بعنوانك الخاص كأول بارامتر -3. في حقل "subgraphNumber" ، قم بإدراج 0 إذا كان أول subgraph تنشره ، 1 إذا كان الثاني ، 2 إذا كان الثالث ، إلخ. -4. يمكن العثور على مدخلات # 2 و # 3 في `` الخاص بك والذي يتكون من `{graphAccount}-{subgraphNumber}`. فمثلا، [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID هو `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`,وهو مزيج من `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` و `subgraphNumber` = `<0>` -5. هاهو! لن يظهر الـ subgraph بعد الآن في عمليات البحث في The Graph Explorer. يرجى ملاحظة ما يلي: +1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) +2. Call 'deprecateSubgraph' with your own address as the first parameter +3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. +4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. 
For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` +5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: -- لن يتمكن المنسقون من الإشارة على الـ subgraph بعد الآن -- سيتمكن المنشقون الذين قد أشاروا شابقا على الـ subgraph من سحب إشاراتهم بمتوسط سعر السهم -- ستتم تحديد الـ subgraphs المهملة برسالة خطأ. +- Curators will not be able to signal on the subgraph anymore +- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price +- Deprecated subgraphs will be indicated with an error message. -إذا تفاعلت مع الـ subgraph المهمل ، فستتمكن من العثور عليه في ملف تعريف المستخدم الخاص بك ضمن علامة التبويب "Subgraphs" أو "Indexing" أو "Curating" على التوالي. +If you interacted with the now deprecated subgraph, you'll be able to find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab respectively. From 1dd4ae2e0ef79b983ab3201d252afae90d4499c4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:09 -0500 Subject: [PATCH 003/241] New translations create-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/create-subgraph-hosted.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/developer/create-subgraph-hosted.mdx b/pages/zh/developer/create-subgraph-hosted.mdx index 6b235e379634..1dee95ffac20 100644 --- a/pages/zh/developer/create-subgraph-hosted.mdx +++ b/pages/zh/developer/create-subgraph-hosted.mdx @@ -6,7 +6,7 @@ Before being able to use the Graph CLI, you need to create your subgraph in [Sub The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. -## Supported Networks +## 支持的网络 The Graph Network supports subgraphs indexing mainnet Ethereum: From 170a0a4a7707abfb18896b14a374c62adb7a578f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:11 -0500 Subject: [PATCH 004/241] New translations define-subgraph-hosted.mdx (Spanish) --- pages/es/developer/define-subgraph-hosted.mdx | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/es/developer/define-subgraph-hosted.mdx b/pages/es/developer/define-subgraph-hosted.mdx index 64011dddac02..92bf5bd8cd2f 100644 --- a/pages/es/developer/define-subgraph-hosted.mdx +++ b/pages/es/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: Definir un Subgrafo +title: Define a Subgraph --- -Un subgrafo define los datos que The Graph indexará de Ethereum, y cómo los almacenará. Una vez desplegado, formará parte de un gráfico global de datos de la blockchain. +A subgraph defines which data The Graph will index from Ethereum, and how it will store it. 
Once deployed, it will form a part of a global graph of blockchain data. -![Definir un Subgrafo](/img/define-subgraph.png) +![Define a Subgraph](/img/define-subgraph.png) -La definición del subgrafo consta de unos cuantos archivos: +The subgraph definition consists of a few files: -- `subgraph.yaml`: un archivo YAML que contiene el manifiesto del subgrafo +- `subgraph.yaml`: a YAML file containing the subgraph manifest -- `schema.graphql`: un esquema GraphQL que define qué datos se almacenan para su subgrafo, y cómo consultarlos a través de GraphQL +- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) codigo que traduce de los datos del evento a las entidades definidas en su esquema (por ejemplo `mapping.ts` en este tutorial) +- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) -Antes de entrar en detalles sobre el contenido del archivo de manifiesto, es necesario instalar el [Graph CLI](https://github.com/graphprotocol/graph-cli) que necesitarás para construir y desplegar un subgrafo. +Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. -## Instalar The Graph CLI +## Install the Graph CLI -The Graph CLI está escrito en JavaScript, y tendrás que instalar `yarn` o `npm` para utilizarlo; se supone que tienes yarn en lo que sigue. +The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. -Una vez que tengas `yarn`, instala The Graph CLI ejecutando +Once you have `yarn`, install the Graph CLI by running -**Instalar con yarn:** +**Install with yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Instalar con npm:** +**Install with npm:** ```bash npm install -g @graphprotocol/graph-cli From dea722e5310b01f941023bdbefd6313fbba15aba Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:12 -0500 Subject: [PATCH 005/241] New translations define-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/define-subgraph-hosted.mdx | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/pages/ar/developer/define-subgraph-hosted.mdx b/pages/ar/developer/define-subgraph-hosted.mdx index c4ec3a65d2e0..92bf5bd8cd2f 100644 --- a/pages/ar/developer/define-subgraph-hosted.mdx +++ b/pages/ar/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: تعريف Subgraph +title: Define a Subgraph --- -يحدد ال Subgraph البيانات التي سيقوم TheGraph بفهرستها من الايثيريوم ، وكيف سيتم تخزينها. بمجرد نشرها ، ستشكل جزءا من رسم graph عالمي لبيانات blockchain. +A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. 
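As a rough illustration of how a subgraph describes the data it will store: entities are declared in the `schema.graphql` file mentioned below. A minimal, hypothetical sketch (the entity and its fields are made up for illustration):

```graphql
# Hypothetical entity; a real subgraph declares whatever entities its dapp needs
type Transfer @entity {
  id: ID!
  from: Bytes!
  to: Bytes!
  tokenId: BigInt!
}
```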
-![تعريف Subgraph](/img/define-subgraph.png) +![Define a Subgraph](/img/define-subgraph.png) -يتكون تعريف Subgraph من عدة ملفات: +The subgraph definition consists of a few files: -- `Subgraph.yaml `ملف YAML يحتوي على Subgraph manifest +- `subgraph.yaml`: a YAML file containing the subgraph manifest -- ` schema.graphql `: مخطط GraphQL يحدد البيانات المخزنة في Subgraph وكيفية الاستعلام عنها عبر GraphQL +- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL - `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) -قبل الخوض في التفاصيل حول محتويات ملف manifest ، تحتاج إلى تثبيت [Graph CLI](https://github.com/graphprotocol/graph-cli) والذي سوف تحتاجه لبناء ونشر Subgraph. +Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. -## قم بتثبيت Graph CLI +## Install the Graph CLI -تمت كتابة Graph CLI بلغة JavaScript ، وستحتاج إلى تثبيتها أيضًا `yarn` or `npm` لتستخدمها؛ من المفترض أن يكون لديك yarn فيما يلي. +The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. -بمجرد حصولك على ` yarn ` ، قم بتثبيت Graph CLI عن طريق التشغيل +Once you have `yarn`, install the Graph CLI by running -**التثبيت بواسطة yarn:** +**Install with yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**التثبيت بواسطة npm:** +**Install with npm:** ```bash npm install -g @graphprotocol/graph-cli From 85772276b7fe5e730779b97a3aa13f3629f766fa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:14 -0500 Subject: [PATCH 006/241] New translations define-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/define-subgraph-hosted.mdx | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/zh/developer/define-subgraph-hosted.mdx b/pages/zh/developer/define-subgraph-hosted.mdx index 17484f0deb7a..92bf5bd8cd2f 100644 --- a/pages/zh/developer/define-subgraph-hosted.mdx +++ b/pages/zh/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: 定义子图 +title: Define a Subgraph --- -子图定义了Graph从以太坊索引哪些数据,以及如何存储这些数据。 子图一旦部署,就成为区块链数据全局图的一部分。 +A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. -![定义子图](/img/define-subgraph.png) +![Define a Subgraph](/img/define-subgraph.png) -子图定义由几个文件组成: +The subgraph definition consists of a few files: -- `subgraph.yaml`: 包含子图清单的 YAML 文件 +- `subgraph.yaml`: a YAML file containing the subgraph manifest -- `schema.graphql`: 一个 GraphQL 模式文件,它定义了为您的子图存储哪些数据,以及如何通过 GraphQL 查询这些数据 +- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL -- `AssemblyScript映射`: 将事件数据转换为模式中定义的实体(例如本教程中的`mapping.ts`)的 [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) 代码 +- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. 
`mapping.ts` in this tutorial) -在详细了解清单文件的内容之前,您需要安装[Graph CLI](https://github.com/graphprotocol/graph-cli),以构建和部署子图。 +Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. -## 安装Graph CLI +## Install the Graph CLI -Graph CLI是使用 JavaScript 编写的,您需要安装`yarn`或 `npm`才能使用它;以下教程中假设您已经安装了yarn。 +The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. -一旦您安装了`yarn`,可以通过运行以下命令安装 Graph CLI +Once you have `yarn`, install the Graph CLI by running -**使用yarn安装:** +**Install with yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**使用npm安装:** +**Install with npm:** ```bash npm install -g @graphprotocol/graph-cli From 0ef16713853aa7d132e8b68ed21dbbb319591d60 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:16 -0500 Subject: [PATCH 007/241] New translations deprecating-a-subgraph.mdx (Spanish) --- pages/es/developer/deprecating-a-subgraph.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/es/developer/deprecating-a-subgraph.mdx b/pages/es/developer/deprecating-a-subgraph.mdx index 28448746b0a5..f8966e025c13 100644 --- a/pages/es/developer/deprecating-a-subgraph.mdx +++ b/pages/es/developer/deprecating-a-subgraph.mdx @@ -1,17 +1,17 @@ --- -title: Deprecar un Subgrafo +title: Deprecating a Subgraph --- -Así que te gustaría deprecar tu subgrafo en The Graph Explorer. Has venido al lugar adecuado! Sigue los siguientes pasos: +So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: -1. Visita el address del contrato [aquí](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) -2. Llama a 'deprecateSubgraph' con tu propia dirección como primer parámetro -3. En el campo 'subgraphNumber', anota 0 si es el primer subgrafo que publicas, 1 si es el segundo, 2 si es el tercero, etc. -4. Las entradas para #2 y #3 se pueden encontrar en tu `` que está compuesto por `{graphAccount}-{subgraphNumber}`. Por ejemplo, el [Subgrafo de Sushi](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, que es una combinación de `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` y `subgraphNumber` = `<0>` -5. Voila! Tu subgrafo ya no aparecerá en las búsquedas en The Graph Explorer. Ten en cuenta lo siguiente: +1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) +2. Call 'deprecateSubgraph' with your own address as the first parameter +3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. +4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` +5. Voila! 
Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: -- Los curadores ya no podrán señalar en el subgrafo -- Los curadores que ya hayan señalado en el subgrafo podrán retirar su señal a un precio promedio de la participación -- Los subgrafos deprecados se indicarán con un mensaje de error. +- Curators will not be able to signal on the subgraph anymore +- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price +- Deprecated subgraphs will be indicated with an error message. -Si interactuaste con el ahora subgrafo deprecado, podrás encontrarlo en tu perfil de usuario en la pestaña "Subgraphs", "Indexing" o "Curating" respectivamente. +If you interacted with the now deprecated subgraph, you'll be able to find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab respectively. From ebf87cc79031abdd9b68323457c5d90caccf59f0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:20 -0500 Subject: [PATCH 008/241] New translations developer-faq.mdx (Spanish) --- pages/es/developer/developer-faq.mdx | 120 +++++++++++++-------------- 1 file changed, 60 insertions(+), 60 deletions(-) diff --git a/pages/es/developer/developer-faq.mdx b/pages/es/developer/developer-faq.mdx index ed6de912d75e..41449c60e5ab 100644 --- a/pages/es/developer/developer-faq.mdx +++ b/pages/es/developer/developer-faq.mdx @@ -1,70 +1,70 @@ --- -title: Preguntas Frecuentes de los Desarrolladores +title: Developer FAQs --- -### 1. ¿Puedo eliminar mi subgrafo? +### 1. Can I delete my subgraph? -No es posible eliminar los subgrafos una vez creados. +It is not possible to delete subgraphs once they are created. -### 2. ¿Puedo cambiar el nombre de mi subgrafo? +### 2. Can I change my subgraph name? -No. Una vez creado un subgrafo, no se puede cambiar el nombre. Asegúrate de pensar en esto cuidadosamente antes de crear tu subgrafo para que sea fácilmente buscable e identificable por otras dapps. +No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. -### 3. ¿Puedo cambiar la cuenta de GitHub asociada a mi subgrafo? +### 3. Can I change the GitHub account associated with my subgraph? -No. Una vez creado un subgrafo, la cuenta de GitHub asociada no puede ser modificada. Asegúrate de pensarlo bien antes de crear tu subgrafo. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. -### 4. ¿Puedo crear un subgrafo si mis contratos inteligentes no tienen eventos? +### 4. Am I still able to create a subgraph if my smart contracts don't have events? -Es muy recomendable que estructures tus contratos inteligentes para tener eventos asociados a los datos que te interesa consultar. Los handlers de eventos en el subgrafo son activados por los eventos de los contratos, y son, con mucho, la forma más rápida de recuperar datos útiles. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. -Si los contratos con los que trabajas no contienen eventos, tu subgrafo puede utilizar handlers de llamadas y bloques para activar la indexación. 
Aunque esto no se recomienda, ya que el rendimiento será significativamente más lento. +If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. -### 5. ¿Es posible desplegar un subgrafo con el mismo nombre para varias redes? +### 5. Is it possible to deploy one subgraph with the same name for multiple networks? -Necesitarás nombres distintos para varias redes. Aunque no se pueden tener diferentes subgrafos bajo el mismo nombre, hay formas convenientes de tener una sola base de código para múltiples redes. Encontrará más información al respecto en nuestra documentación: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. ¿En qué se diferencian las plantillas de las fuentes de datos? +### 6. How are templates different from data sources? -Las plantillas permiten crear fuentes de datos sobre la marcha, mientras el subgrafo se indexa. Puede darse el caso de que tu contrato genere nuevos contratos a medida que la gente interactúe con él, y dado que conoces la forma de esos contratos (ABI, eventos, etc) por adelantado, puedes definir cómo quieres indexarlos en una plantilla y, cuando se generen, tu subgrafo creará una fuente de datos dinámica proporcionando la dirección del contrato. +Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. -Consulta la sección "Instalar un modelo de fuente de datos" en: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). +Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). -### 7. ¿Cómo puedo asegurarme de que estoy utilizando la última versión de graph-node para mis despliegues locales? +### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? -Puede ejecutar el siguiente comando: +You can run the following command: ```sh docker pull graphprotocol/graph-node:latest ``` -**NOTA:** docker / docker-compose siempre utilizará la versión de graph-node que se sacó la primera vez que se ejecutó, por lo que es importante hacer esto para asegurarse de que estás al día con la última versión de graph-node. +**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. -### 8. ¿Cómo puedo llamar a una función de contrato o acceder a una variable de estado pública desde mis mapeos de subgrafos? +### 8. How do I call a contract function or access a public state variable from my subgraph mappings? 
-Echa un vistazo al estado `Access to smart contract` dentro de la sección [AssemblyScript API](/developer/assemblyscript-api). +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). -### 9. ¿Es posible configurar un subgrafo usando `graph init` desde `graph-cli` con dos contratos? ¿O debo añadir manualmente otra fuente de datos en `subgraph.yaml` después de ejecutar `graph init`? +### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? -Lamentablemente, esto no es posible en la actualidad. `graph init` está pensado como un punto de partida básico, a partir del cual puedes añadir más fuentes de datos manualmente. +Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. -### 10. Quiero contribuir o agregar una cuestión en GitHub, ¿dónde puedo encontrar los repositorios de código abierto? +### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. ¿Cuál es la forma recomendada de construir ids "autogenerados" para una entidad cuando se manejan eventos? +### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? -Si sólo se crea una entidad durante el evento y si no hay nada mejor disponible, entonces el hash de la transacción + el índice del registro serían únicos. Puedes ofuscar esto convirtiendo eso en Bytes y luego pasándolo por `crypto.keccak256` pero esto no lo hará más único. +If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 12. Cuando se escuchan varios contratos, ¿es posible seleccionar el orden de los contratos para escuchar los eventos? +### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 13. ¿Es posible diferenciar entre redes (mainnet, Kovan, Ropsten, local) desde los handlers de eventos? +### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? -Sí. Puedes hacerlo importando `graph-ts` como en el ejemplo siguiente: +Yes. You can do this by importing `graph-ts` as per the example below: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,39 +73,39 @@ dataSource.network() dataSource.address() ``` -### 14. ¿Apoyan el bloqueo y los handlers de llamadas en Rinkeby? +### 14. Do you support block and call handlers on Rinkeby? -En Rinkeby apoyamos los handlers de bloque, pero sin `filter: call`. Los handlers de llamadas no son compatibles por el momento. +On Rinkeby we support block handlers, but without `filter: call`. 
Call handlers are not supported for the time being. -### 15. ¿Puedo importar ethers.js u otras bibliotecas JS en mis mapeos de subgrafos? +### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? -Actualmente no, ya que los mapeos se escriben en AssemblyScript. Una posible solución alternativa a esto es almacenar los datos en bruto en entidades y realizar la lógica que requiere las bibliotecas JS en el cliente. +Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. -### 16. ¿Es posible especificar en qué bloque se inicia la indexación? +### 16. Is it possible to specifying what block to start indexing on? -Sí. `dataSources.source.startBlock` en el `subgraph.yaml` especifica el número del bloque a partir del cual la fuente de datos comienza a indexar. En la mayoría de los casos, sugerimos utilizar el bloque en el que se creó el contrato: Bloques de inicio +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks -### 17. ¿Hay algunos consejos para aumentar el rendimiento de la indexación? Mi subgrafo está tardando mucho en sincronizarse. +### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. -Sí, deberías echar un vistazo a la función opcional de inicio de bloque para comenzar la indexación desde el bloque en el que se desplegó el contrato: [Start blocks](/developer/create-subgraph-hosted#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) -### 18. ¿Hay alguna forma de consultar directamente el subgrafo para determinar cuál es el último número de bloque que ha indexado? +### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? -¡Sí! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: +Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. ¿Qué redes son compatibles con The Graph? +### 19. What networks are supported by The Graph? -The Graph Node admite cualquier cadena de API JSON RPC compatible con EVM. +The graph-node supports any EVM-compatible JSON RPC API chain. -The Graph Network admite subgrafos que indexan la red principal de Ethereum: +The Graph Network supports subgraphs indexing mainnet Ethereum: - `mainnet` -En el Servicio Alojado, se admiten las siguientes redes: +In the Hosted Service, the following networks are supported: - Ethereum mainnet - Kovan @@ -133,40 +133,40 @@ En el Servicio Alojado, se admiten las siguientes redes: - Optimism - Optimism Testnet (on Kovan) -Se está trabajando en la integración de otras blockchains, puedes leer más en nuestro repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). 
+There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). -### 20. ¿Es posible duplicar un subgrupo en otra cuenta o endpoint sin volver a desplegarlo? +### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Tienes que volver a desplegar el subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 21. ¿Es posible utilizar Apollo Federation sobre graph-node? +### 21. Is this possible to use Apollo Federation on top of graph-node? -Federation aún no es compatible, aunque queremos apoyarla en el futuro. Por el momento, algo que se puede hacer es utilizar el stitching de esquemas, ya sea en el cliente o a través de un servicio proxy. +Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. -### 22. ¿Existe un límite en el número de objetos que The Graph puede devolver por consulta? +### 22. Is there a limit to how many objects The Graph can return per query? -Por defecto, las respuestas a las consultas están limitadas a 100 elementos por colección. Si quieres recibir más, puedes llegar hasta 1000 artículos por colección y más allá puedes paginar con: +By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -### 23. Si mi dapp frontend utiliza The Graph para la consulta, ¿tengo que escribir mi clave de consulta en el frontend directamente? Si pagamos tasas de consulta a los usuarios, ¿los usuarios malintencionados harán que nuestras tasas de consulta sean muy altas? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Actualmente, el enfoque recomendado para una dapp es añadir la clave al frontend y exponerla a los usuarios finales. Dicho esto, puedes limitar esa clave a un nombre de host, como _yourdapp.io_ y subgrafo. El gateway está siendo gestionado actualmente por Edge & Node. Parte de la responsabilidad de un gateway es vigilar los comportamientos abusivos y bloquear el tráfico de los clientes maliciosos. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -### 24. ¿Dónde puedo encontrar mi subgrafo actual en el Servicio Alojado? +### 24. Where do I go to find my current subgraph on the Hosted Service? -Dirígete al Servicio Alojado para encontrar los subgrafos que tú u otros desplegaron en el Servicio Alojado. Puedes encontrarlo [aquí.](https://thegraph.com/hosted-service) +Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) -### 25. 
¿Comenzará el Servicio Alojado a cobrar tasas de consulta? +### 25. Will the Hosted Service start charging query fees? -The Graph nunca cobrará por el Servicio Alojado. The Graph es un protocolo descentralizado, y cobrar por un servicio centralizado no está alineado con los valores de The Graph. El Servicio Alojado siempre fue un paso temporal para ayudar a llegar a la red descentralizada. Los desarrolladores dispondrán de tiempo suficiente para migrar a la red descentralizada a medida que se sientan cómodos. +The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. -### 26. ¿Cuándo se cerrará el Servicio Alojado? +### 26. When will the Hosted Service be shut down? -Si y cuando se planee hacer esto, se notificará a la comunidad con suficiente antelación y se tendrán en cuenta los subgrafos construidos en el Servicio Alojado. +If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service. -### 27. ¿Cómo puedo actualizar un subgrafo en mainnet? +### 27. How do I upgrade a subgraph on mainnet? -Si eres un desarrollador de subgrafos, puedes actualizar una nueva versión de tus subgrafos a Studio utilizando la CLI. En ese momento será privado, pero si estás contento con él, puedes publicarlo en the Graph Explorer descentralizado. Esto creará una nueva versión de tu subgrafo que los Curadoress pueden empezar a señalar. +If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. From f707b62107a9d5920b4107bc9222c6cf06078bde Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:21 -0500 Subject: [PATCH 009/241] New translations developer-faq.mdx (Arabic) --- pages/ar/developer/developer-faq.mdx | 90 ++++++++++++++-------------- 1 file changed, 45 insertions(+), 45 deletions(-) diff --git a/pages/ar/developer/developer-faq.mdx b/pages/ar/developer/developer-faq.mdx index 1f0a5cbd6f81..41449c60e5ab 100644 --- a/pages/ar/developer/developer-faq.mdx +++ b/pages/ar/developer/developer-faq.mdx @@ -1,70 +1,70 @@ --- -title: الأسئلة الشائعة للمطورين +title: Developer FAQs --- -### 1. هل يمكنني حذف ال Subgraph الخاص بي؟ +### 1. Can I delete my subgraph? -لا يمكن حذف ال Subgraph بمجرد إنشائها. +It is not possible to delete subgraphs once they are created. -### 2. هل يمكنني تغيير اسم ال Subgraph الخاص بي؟ +### 2. Can I change my subgraph name? -لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير الاسم. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك حتى يسهل البحث عنه والتعرف عليه من خلال ال Dapps الأخرى. +No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. -### 3. هل يمكنني تغيير حساب GitHub المرتبط ب Subgraph الخاص بي؟ +### 3. Can I change the GitHub account associated with my subgraph? -لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير حساب GitHub المرتبط. 
تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. -### 4. هل يمكنني إنشاء Subgraph إذا لم تكن العقود الذكية الخاصة بي تحتوي على أحداث؟ +### 4. Am I still able to create a subgraph if my smart contracts don't have events? -من المستحسن جدا أن تقوم بإنشاء عقودك الذكية بحيث يكون لديك أحداث مرتبطة بالبيانات التي ترغب في الاستعلام عنها. يتم تشغيل معالجات الأحداث في subgraph بواسطة أحداث العقد، وهي إلى حد بعيد أسرع طريقة لاسترداد البيانات المفيدة. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. -إذا كانت العقود التي تعمل معها لا تحتوي على أحداث، فيمكن أن يستخدم ال Subgraph معالجات الاتصال والحظر لتشغيل الفهرسة. وهذا غير موصى به لأن الأداء سيكون أبطأ بشكل ملحوظ. +If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. -### 5. هل من الممكن نشر Subgraph واحد تحمل نفس الاسم لشبكات متعددة؟ +### 5. Is it possible to deploy one subgraph with the same name for multiple networks? -ستحتاج إلى أسماء مختلفه لشبكات متعددة. ولا يمكن أن يكون لديك Subgraph مختلف تحت نفس الاسم ، إلا أن هناك طرقًا ملائمة لأمتلاك قاعدة بيانات واحدة لشبكات متعددة. اكتشف المزيد حول هذا الأمر في وثائقنا: [ إعادة نشر ال Subgraph ](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. كيف تختلف النماذج عن مصادر البيانات؟ +### 6. How are templates different from data sources? -تسمح لك النماذج بإنشاء مصادر البيانات على الفور ، أثناء فهرسة ال Subgraph الخاص بك. قد يكون الأمر هو أن عقدك سينتج عنه عقود جديدة عندما يتفاعل الأشخاص معه ، وبما أنك تعرف شكل هذه العقود (ABI ، الأحداث ، إلخ) مسبقًا ، يمكنك تحديد الطريقة التي تريد فهرستها بها في النموذج ومتى يتم إنتاجها ، وسيقوم ال Subgraph الخاص بك بإنشاء مصدر بيانات ديناميكي عن طريق توفير عنوان العقد. +Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. -راجع قسم "إنشاء نموذج مصدر بيانات" في: [ نماذج مصدر البيانات ](/developer/create-subgraph-hosted#data-source-templates). +Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). -### 7. كيف أتأكد من أنني أستخدم أحدث إصدار من graph-node لعمليات النشر المحلية الخاصة بي؟ +### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? 
-يمكنك تشغيل الأمر التالي: +You can run the following command: ```sh docker pull graphprotocol/graph-node:latest ``` -** ملاحظة: ** ستستخدم docker / docker-compose دائمًا أي إصدار من graph-node تم سحبه في المرة الأولى التي قمت بتشغيلها ، لذلك من المهم القيام بذلك للتأكد من أنك محدث بأحدث إصدار graph-node. +**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. -### 8. كيف يمكنني استدعاء دالة العقد أو الوصول إلى متغير الحالة العامة من Subgraph mappings الخاصة بي؟ +### 8. How do I call a contract function or access a public state variable from my subgraph mappings? -ألقِ نظرة على حالة ` الوصول إلى العقد الذكي ` داخل القسم [ AssemblyScript API ](/developer/assemblyscript-api). +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). -### 9. هل من الممكن إنشاء Subgraph باستخدام`graph init` from `graph-cli`بعقدين؟ أو هل يجب علي إضافة مصدر بيانات آخر يدويًا في ` subgraph.yaml ` بعد تشغيل ` graph init `؟ +### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? -للأسف هذا غير ممكن حاليا. الغرض من ` graph init ` هو أن تكون نقطة بداية أساسية حيث يمكنك من خلالها إضافة المزيد من مصادر البيانات يدويًا. +Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. -### 10. أرغب في المساهمة أو إضافة مشكلة GitHub ، أين يمكنني العثور على مستودعات مفتوحة المصدر؟ +### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. ما هي الطريقة الموصى بها لإنشاء معرفات "تلقائية" لكيان عند معالجة الأحداث؟ +### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? -إذا تم إنشاء كيان واحد فقط أثناء الحدث ولم يكن هناك أي شيء متاح بشكل أفضل ، فسيكون hash الإجراء + فهرس السجل فريدا. يمكنك تشويشها عن طريق تحويلها إلى Bytes ثم تمريرها عبر ` crypto.keccak256 ` ولكن هذا لن يجعلها فريدة من نوعها. +If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 12. عند الاستماع إلى عدة عقود ، هل من الممكن تحديد أمر العقد للاستماع إلى الأحداث؟ +### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? -ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 13. هل من الممكن التفريق بين الشبكات (mainnet، Kovan، Ropsten، local) من داخل معالجات الأحداث؟ +### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? -نعم. يمكنك القيام بذلك عن طريق استيراد ` graph-ts ` كما في المثال أدناه: +Yes. 
You can do this by importing `graph-ts` as per the example below: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,39 +73,39 @@ dataSource.network() dataSource.address() ``` -### 14. هل تدعم معالجات الكتل والإستدعاء على Rinkeby؟ +### 14. Do you support block and call handlers on Rinkeby? -في Rinkeby ، ندعم معالجات الكتل ، لكن بدون ` filter: call `. معالجات الاستدعاء غير مدعومة في الوقت الحالي. +On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. -### 15. هل يمكنني استيراد ethers.js أو مكتبات JS الأخرى إلى ال Subgraph mappings الخاصة بي؟ +### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? -ليس حاليًا ، حيث تتم كتابة ال mappings في AssemblyScript. أحد الحلول البديلة الممكنة لذلك هو تخزين البيانات الأولية في الكيانات وتنفيذ المنطق الذي يتطلب مكتبات JS على ال client. +Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. -### 16. هل من الممكن تحديد الكتلة التي سيتم بدء الفهرسة عليها؟ +### 16. Is it possible to specifying what block to start indexing on? -نعم. يحدد ` dataSources.source.startBlock ` في ملف ` subgraph.yaml ` رقم الكتلة الذي يبدأ مصدر البيانات الفهرسة منها. في معظم الحالات نقترح استخدام الكتلة التي تم إنشاء العقد من خلالها: Start blocks +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks -### 17. هل هناك بعض النصائح لتحسين أداء الفهرسة؟ تستغرق مزامنة ال subgraph وقتًا طويلاً جدًا. +### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. -نعم ، يجب إلقاء نظرة على ميزة start block الاختيارية لبدء الفهرسة من الكتل التي تم نشر العقد: [ start block ](/developer/create-subgraph-hosted#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) -### 18. هل هناك طريقة للاستعلام عن ال Subgraph بشكل مباشر مباشرةً رقم الكتلة الأخير الذي تمت فهرسته؟ +### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? -نعم! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. ما هي الشبكات الذي يدعمها The Graph؟ +### 19. What networks are supported by The Graph? -تدعم graph-node أي سلسلة API JSON RPC متوافقة مع EVM. +The graph-node supports any EVM-compatible JSON RPC API chain. 
-شبكة The Graph تدعم ال subgraph وذلك لفهرسة mainnet Ethereum: +The Graph Network supports subgraphs indexing mainnet Ethereum: - `mainnet` -في ال Hosted Service ، يتم دعم الشبكات التالية: +In the Hosted Service, the following networks are supported: - Ethereum mainnet - Kovan @@ -129,9 +129,9 @@ curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"or - Fuse - Moonbeam - Arbitrum One -- (Arbitrum Testnet (on Rinkeby +- Arbitrum Testnet (on Rinkeby) - Optimism -- (Optimism Testnet (on Kovan +- Optimism Testnet (on Kovan) There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). From 15535f8d9065d0a5bd872d37f4e69e5cb69250b5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:25 -0500 Subject: [PATCH 010/241] New translations distributed-systems.mdx (Spanish) --- pages/es/developer/distributed-systems.mdx | 50 +++++++++++----------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/pages/es/developer/distributed-systems.mdx b/pages/es/developer/distributed-systems.mdx index bfbc733c4107..894fcbe2e18b 100644 --- a/pages/es/developer/distributed-systems.mdx +++ b/pages/es/developer/distributed-systems.mdx @@ -1,37 +1,37 @@ --- -title: Sistemas Distribuidos +title: Distributed Systems --- -The Graph es un protocolo implementado como un sistema distribuido. +The Graph is a protocol implemented as a distributed system. -Las conexiones fallan. Las solicitudes llegan fuera de orden. Diferentes computadoras con relojes y estados desincronizados procesan solicitudes relacionadas. Los servidores se reinician. Las reorganizaciones se producen entre las solicitudes. Estos problemas son inherentes a todos los sistemas distribuidos, pero se agravan en los sistemas que funcionan a escala mundial. +Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. -Considera este ejemplo de lo que puede ocurrir si un cliente pregunta a un Indexador por los últimos datos durante una reorganización. +Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. -1. El indexador ingiere el bloque 8 -2. Solicitud servida al cliente para el bloque 8 -3. El indexador ingiere el bloque 9 -4. El indexador ingiere el bloque 10A -5. Solicitud servida al cliente para el bloque 10A -6. El indexador detecta la reorganización a 10B y retrocede a 10A -7. Solicitud servida al cliente para el bloque 9 -8. El indexador ingiere el bloque 10B -9. El indexador ingiere el bloque 11 -10. Solicitud servida al cliente para el bloque 11 +1. Indexer ingests block 8 +2. Request served to the client for block 8 +3. Indexer ingests block 9 +4. Indexer ingests block 10A +5. Request served to the client for block 10A +6. Indexer detects reorg to 10B and rolls back 10A +7. Request served to the client for block 9 +8. Indexer ingests block 10B +9. Indexer ingests block 11 +10. Request served to the client for block 11 -Desde el punto de vista del indexador, las cosas avanzan lógicamente. El tiempo avanza, aunque tuvimos que hacer retroceder un uncle bloque y jugar el bloque bajo el consenso hacia adelante en la parte superior. 
En el camino, el Indexador sirve las peticiones utilizando el último estado que conoce en ese momento. +From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. -Sin embargo, desde el punto de vista del cliente, las cosas parecen caóticas. El cliente observa que las respuestas fueron para los bloques 8, 10, 9 y 11 en ese orden. Lo llamamos el problema del "block wobble" (bamboleo del bloque). Cuando un cliente experimenta un bamboleo de bloques, los datos pueden parecer contradecirse a lo largo del tiempo. La situación se agrava si tenemos en cuenta que no todos los indexadores ingieren los últimos bloques de forma simultánea, y tus peticiones pueden ser dirigidas a varios indexadores. +From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. -Es responsabilidad del cliente y del servidor trabajar juntos para proporcionar datos coherentes al usuario. Hay que utilizar diferentes enfoques en función de la coherencia deseada, ya que no existe un programa adecuado para todos los problemas. +It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. -Razonar las implicancias de los sistemas distribuidos es difícil, pero la solución puede no serlo! Hemos establecido APIs y patrones para ayudarte a navegar por algunos casos de uso comunes. Los siguientes ejemplos ilustran estos patrones pero eluden los detalles requeridos por el código de producción (como el manejo de errores y la cancelación) para no ofuscar las ideas principales. +Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. -## Sondeo para obtener datos actualizados +## Polling for updated data -The Graph proporciona la API `block: { number_gte: $minBlock }`, que asegura que la respuesta es para un solo bloque igual o superior a `$minBlock`. Si la petición se realiza a una instancia de `graph-node` y el bloque mínimo no está aún sincronizado, `graph-node` devolverá un error. Si `graph-node` ha sincronizado el bloque mínimo, ejecutará la respuesta para el último bloque. Si la solicitud se hace a un Edge & Node Gateway, el Gateway filtrará los Indexadores que aún no hayan sincronizado el bloque mínimo y hará la solicitud para el último bloque que el Indexador haya sincronizado. +The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. 
If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. -Podemos utilizar `number_gte` para asegurarnos de que el tiempo nunca viaja hacia atrás cuando se realizan sondeos de datos en un loop. Aquí hay un ejemplo: +We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: ```javascript /// Updates the protocol.paused variable to the latest @@ -73,11 +73,11 @@ async function updateProtocolPaused() { } ``` -## Obtención de un conjunto de elementos relacionados +## Fetching a set of related items -Otro caso de uso es la recuperación de un conjunto grande o, más generalmente, la recuperación de elementos relacionados a través de múltiples solicitudes. A diferencia del caso del sondeo (en el que la coherencia deseada era avanzar en el tiempo), la coherencia deseada es para un único punto en el tiempo. +Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. -Aquí utilizaremos el argumento `block: { hash: $blockHash }` para anclar todos nuestros resultados al mismo bloque. +Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. ```javascript /// Gets a list of domain names from a single block using pagination @@ -129,4 +129,4 @@ async function getDomainNames() { } ``` -Ten en cuenta que en caso de reorganización, el cliente tendrá que reintentar desde la primera solicitud para actualizar el hash del bloque a un non-uncle bloque. +Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. From 93b21e9a52c3f86b24523fda8977d1b90b6484c3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:27 -0500 Subject: [PATCH 011/241] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 410 +++++++++--------- 1 file changed, 205 insertions(+), 205 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index c95b98fdc85d..ccb4432abba2 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -1,10 +1,10 @@ --- -title: إنشاء الـ Subgraph +title: Create a Subgraph --- -قبل التمكن من استخدام Graph CLI ، يلزمك إنشاء الـ subgraph الخاص بك في [ Subgraph Studio ](https://thegraph.com/studio). ستتمكن بعد ذلك من إعداد مشروع الـ subgraph الخاص بك ونشره على المنصة الي تختارها. لاحظ أنه لن يتم نشر ** الـ subgraphs التي لا تقوم بفهرسة mainnet لإيثريوم على شبكة The Graph **. +Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. -يمكن استخدام الأمر `graph init ` لإعداد مشروع subgraph جديد ، إما من عقد موجود على أي من شبكات Ethereum العامة ، أو من مثال subgraph. 
يمكن استخدام هذا الأمر لإنشاء subgraph في Subgraph Studio عن طريق تمرير `graph init --product subgraph-studio`. إذا كان لديك بالفعل عقد ذكي تم نشره على شبكة Ethereum mainnet أو إحدى شبكات testnets ، فإن تمهيد subgraph جديد من هذا العقد يمكن أن يكون طريقة جيدة للبدء. لكن أولا ، لنتحدث قليلا عن الشبكات التي يدعمها The Graph. +The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. ## الشبكات المدعومة @@ -12,7 +12,7 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `mainnet` -** يتم دعم الشبكات الإضافية في الإصدار beta على Hosted Service **: +**Additional Networks are supported in beta on the Hosted Service**: - `mainnet` - `kovan` @@ -44,13 +44,13 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `aurora` - `aurora-testnet` -يعتمد Graph's Hosted Service على استقرار وموثوقية التقنيات الأساسية ، وهي نقاط JSON RPC endpoints. المتوفرة. سيتم تمييز الشبكات الأحدث على أنها في مرحلة beta حتى تثبت الشبكة نفسها من حيث الاستقرار والموثوقية وقابلية التوسع. خلال هذه الفترة beta ، هناك خطر حدوث عطل وسلوك غير متوقع. +The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. -تذكر أنك ** لن تكون قادرا ** على نشر subgraph يفهرس شبكة non-mainnet لـ شبكة Graph اللامركزية في [Subgraph Studio ](/ studio / subgraph-studio). +Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). -## من عقد موجود +## From An Existing Contract -الأمر التالي ينشئ subgraph يفهرس كل الأحداث للعقد الموجود. إنه يحاول جلب ABI للعقد من Etherscan ويعود إلى طلب مسار ملف محلي. إذا كانت أي من arguments الاختيارية مفقودة ، فسيأخذك عبر نموذج تفاعلي. +The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. ```sh graph init \ @@ -61,23 +61,23 @@ graph init \ [] ``` -`` هو ID لـ subgraph الخاص بك في Subgraph Studio ، ويمكن العثور عليه في صفحة تفاصيل الـ subgraph. +The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. -## من مثال Subgraph +## From An Example Subgraph -الوضع الثاني `graph init` يدعم إنشاء مشروع جديد من مثال subgraph. الأمر التالي يقوم بهذا: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: ``` graph init --studio ``` -يعتمد مثال الـ subgraph على عقد Gravity بواسطة Dani Grant الذي يدير avatars للمستخدم ويصدر أحداث ` NewGravatar ` أو ` UpdateGravatar ` كلما تم إنشاء avatars أو تحديثها. 
يعالج الـ subgraph هذه الأحداث عن طريق كتابة كيانات ` Gravatar ` إلى مخزن Graph Node والتأكد من تحديثها وفقا للأحداث. ستنتقل الأقسام التالية إلى الملفات التي تشكل الـ subgraph manifest لهذا المثال. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. ## The Subgraph Manifest -Subgraph manifest `subgraph.yaml` تحدد العقود الذكية لفهارس الـ subgraph الخاص بك ، والأحداث من هذه العقود التي يجب الانتباه إليها ، وكيفية عمل map لبيانات الأحداث للكيانات التي تخزنها Graph Node وتسمح بالاستعلام عنها. يمكن العثور على المواصفات الكاملة لـ subgraph manifests [ هنا ](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -بالنسبة لمثال الـ subgraph ،يكون الـ ` subgraph.yaml `: +For the example subgraph, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -118,59 +118,59 @@ dataSources: file: ./src/mapping.ts ``` -الإدخالات الهامة لتحديث manifest هي: +The important entries to update for the manifest are: -- ` description`: وصف يمكن قراءته لماهية الـ subgraph. يتم عرض هذا الوصف بواسطة Graph Explorer عند نشر الـ subgraph على الـ Hosted Service. +- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. -- `repository`: عنوان URL للمخزن حيث يمكن العثور على subgraph manifest. يتم أيضا عرض هذا بواسطة Graph Explorer. +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. -- `features`: قائمة بجميع أسماء الـ [ الميزات](#experimental-features) المستخدمة. +- `features`: a list of all used [feature](#experimental-features) names. -- `dataSources.source`: عنوان العقد الذكي ،و مصادر الـ subgraph ، و abi استخدام العقد الذكي. العنوان اختياري. وبحذفه يسمح بفهرسة الأحداث المطابقة من جميع العقود. +- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: الرقم الاختياري للكتلة والتي يبدأ مصدر البيانات بالفهرسة منها. في معظم الحالات نقترح استخدام الكتلة التي تم إنشاء العقد من خلالها. +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. -- `dataSources.mapping.entities`: الكيانات التي يكتبها مصدر البيانات إلى المخزن. يتم تحديد مخطط كل كيان في ملف schema.graphql. +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. -- `dataSources.mapping.abis`: ملف ABI واحد أو أكثر لعقد المصدر بالإضافة إلى أي عقود ذكية أخرى تتفاعل معها من داخل الـ mappings. 
+- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. - `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. - `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. بدون فلتر، سيتم تشغيل معالج الكتلة في كل كتلة. يمكن توفير فلتر اختياري مع الأنواع التالية: call`. سيعمل فلتر ` call` على تشغيل المعالج إذا كانت الكتلة تحتوي على استدعاء واحد على الأقل لعقد مصدر البيانات. +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. -يمكن لـ subgraph واحد فهرسة البيانات من عقود ذكية متعددة. أضف إدخالا لكل عقد يجب فهرسة البيانات منه إلى مصفوفة ` dataSources `. +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. -يتم ترتيب الـ triggers لمصدر البيانات داخل الكتلة باستخدام العملية التالية: +The triggers for a data source within a block are ordered using the following process: -1. يتم ترتيب triggers الأحداث والاستدعاءات أولا من خلال فهرس الإجراء داخل الكتلة. -2. يتم ترتيب triggers الحدث والاستدعاء في نفس الإجراء باستخدام اصطلاح: يتم تفعيل مشغلات الحدث أولا ثم مشغلات الاستدعاء (event triggers first then call triggers) ، ويحترم كل نوع الترتيب المحدد في الـ manifest. -3. يتم تشغيل مشغلات الكتلة بعد مشغلات الحدث والاستدعاء، بالترتيب المحدد في الـ manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers with in the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. -قواعد الترتيب هذه عرضة للتغيير. +These ordering rules are subject to change. ### Getting The ABIs -يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI: +The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: -- إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs. -- إذا كنت تقوم ببناء subgraph لمشروع عام ، فيمكنك تنزيل هذا المشروع على جهاز الكمبيوتر الخاص بك والحصول على ABI باستخدام [ ` truffle compile ` ](https://truffleframework.com/docs/truffle/overview) أو استخدام solc للترجمة. -- يمكنك أيضا العثور على ABI على [ Etherscan ](https://etherscan.io/) ، ولكن هذا ليس موثوقا به دائما ، حيث قد يكون ABI الذي تم تحميله هناك قديما. تأكد من أن لديك ABI الصحيح ، وإلا فإن تشغيل الـ subgraph الخاص بك سيفشل. +- If you are building your own project, you will likely have access to your most current ABIs. 
+- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. -## مخطط GraphQL +## The GraphQL Schema -مخطط الـ subgraph الخاص بك موجود في الملف ` schema.graphql `. يتم تعريف مخططات GraphQL باستخدام لغة تعريف واجهة GraphQL. إذا لم تكتب مخطط GraphQL مطلقا ، فمن المستحسن أن تقوم بمراجعة هذا التمهيد على نظام نوع GraphQL. يمكن العثور على الوثائق المرجعية لمخططات GraphQL في قسم [ GraphQL API ](/developer/graphql-api). +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. -## تعريف الكيانات +## Defining Entities -قبل تعريف الكيانات ، من المهم التراجع والتفكير في كيفية هيكلة بياناتك وربطها. سيتم إجراء جميع الاستعلامات لنموذج البيانات المعرفة في مخطط الـ subgraph والكيانات المفهرسة بواسطة الـ subgraph. لهذا السبب ، من الجيد تعريف مخطط الـ subgraph بطريقة تتوافق مع احتياجات الـ dapp الخاص بك. قد يكون من المفيد تصور الكيانات على أنها "كائنات تحتوي على بيانات" ، وليس أحداثا أو دوال. +Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. -بواسطة The Graph ، يمكنك ببساطة تحديد أنواع الكيانات في ` schema.graphql ` ، وسيقوم Graph Node بإنشاء حقول المستوى الأعلى للاستعلام عن الـ instances الفردية والمجموعات من هذا النوع من الكيانات. كل نوع يجب أن يكون كيانا يكون مطلوبا للتعليق عليه باستخدام التوجيه `entity `. +With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. -### مثال جيد +### Good Example -تم تنظيم الكيان ` Gravatar ` أدناه حول كائن Gravatar وهو مثال جيد لكيفية تعريف الكيان. +The `Gravatar` entity below is structured around a Gravatar object and is a good example of how an entity could be defined. ```graphql type Gravatar @entity { @@ -182,9 +182,9 @@ type Gravatar @entity { } ``` -### مثال سيئ +### Bad Example -يستند مثالان الكيانات أدناه ` GravatarAccepted ` و ` GravatarDeclined ` إلى أحداث. لا يوصى بعمل map الأحداث أو استدعاءات الدوال للكيانات 1: 1. +The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. ```graphql type GravatarAccepted @entity { @@ -202,35 +202,35 @@ type GravatarDeclined @entity { } ``` -### الحقول الاختيارية والمطلوبة +### Optional and Required Fields -يمكن تعريف حقول الكيانات على أنها مطلوبة أو اختيارية. الحقول المطلوبة يشار إليها بواسطة `!` في المخطط. 
إذا لم يتم تعيين حقل مطلوب في الـ mapping ، فستتلقى هذا الخطأ عند الاستعلام عن الحقل: +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: ``` Null value resolved for non-null field 'name' ``` -يجب أن يكون لكل كيان حقل ` id` ، وهو من النوع ` ID!` (string). حقل `id` يقدم كمفتاح رئيسي ويجب أن يكون فريدا في كل الكيانات لنفس النوع. +Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. -### أنواع المقاييس المضمنة +### Built-In Scalar Types -#### المقاييس المدعومة من GraphQL +#### GraphQL Supported Scalars -ندعم المقاييس التالية في GraphQL API الخاصة بنا: +We support the following scalars in our GraphQL API: -| النوع | الوصف | -| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | -| `ID` | يتم تخزينه كـ `string`. | -| `String` | لقيم ` string`. لا يتم دعم اNull ويتم إزالتها تلقائيا. | -| `Boolean` | لقيم `boolean`. | -| `Int` | GraphQL spec تعرف `Int` بحجم 32 بايت. | -| `BigInt` | أعداد صحيحة كبيرة. يستخدم لأنواع Ethereum ` uint32 ` ، ` int64 ` ، ` uint64 ` ، ... ، ` uint256 `. ملاحظة: كل شيء تحت ` uint32 ` ، مثل ` int32 ` أو ` uint24 ` أو ` int8 ` يتم تمثيله كـ ` i32 `. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. يتراوح نطاق الأس من −6143 إلى +6144. مقربة إلى 34 رقما. | +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums -يمكنك أيضا إنشاء enums داخل مخطط. Enums لها البناء التالي: +You can also create enums within a schema. Enums have the following syntax: ```graphql enum TokenStatus { @@ -240,19 +240,19 @@ enum TokenStatus { } ``` -بمجرد تعريف الـ enum في المخطط ، يمكنك استخدام string لقيمة الـ enum لتعيين حقل الـ enum في الكيان. على سبيل المثال ، يمكنك تعيين ` tokenStatus ` إلى ` SecondOwner ` عن طريق تعريف الكيان أولا ثم تعيين الحقل بعد ذلك بـ `entity.tokenStatus = "SecondOwner`. يوضح المثال أدناه الشكل الذي سيبدو عليه كيان التوكن في حقل الـ enum: +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. 
For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: -يمكن العثور على مزيد من التفاصيل حول كتابة الـ enums في [GraphQL documentation](https://graphql.org/learn/schema/). +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). -#### علاقات الكيانات +#### Entity Relationships -قد يكون للكيان علاقة بواحد أو أكثر من الكيانات الأخرى في مخططك. قد يتم اجتياز هذه العلاقات في استعلاماتك. العلاقات في The Graph تكون أحادية الاتجاه. من الممكن محاكاة العلاقات ثنائية الاتجاه من خلال تعريف علاقة أحادية الاتجاه على "طرفي" العلاقة. +An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. -يتم تعريف العلاقات على الكيانات تماما مثل أي حقل آخر عدا أن النوع المحدد هو كيان آخر. +Relationships are defined on entities just like any other field except that the type specified is that of another entity. -#### العلاقات واحد لواحد +#### One-To-One Relationships -عرف نوع كيان ` Transaction` بعلاقة فردية اختيارية مع نوع كيان ` TransactionReceipt `: +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: ```graphql type Transaction @entity { @@ -266,9 +266,9 @@ type TransactionReceipt @entity { } ``` -#### علاقات واحد لمتعدد +#### One-To-Many Relationships -عرف نوع كيان ` TokenBalance ` بعلاقة واحد لمتعدد المطلوبة مع نوع كيان Token: +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: ```graphql type Token @entity { @@ -282,15 +282,15 @@ type TokenBalance @entity { } ``` -#### البحث العكسي +#### Reverse Lookups -يمكن تعريف البحث العكسي لكيان من خلال الحقل `derivedFrom `. يؤدي هذا إلى إنشاء حقل افتراضي للكيان الذي قد يتم الاستعلام عنه ولكن لا يمكن تعيينه يدويا من خلال الـ mappings API. بالأحرى، هو مشتق من العلاقة المعرفة للكيان الآخر. بالنسبة لمثل هذه العلاقات ، نادرا ما يكون من المنطقي تخزين جانبي العلاقة ، وسيكون أداء الفهرسة والاستعلام أفضل عندما يتم تخزين جانب واحد فقط ويتم اشتقاق الجانب الآخر. +Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -بالنسبة لعلاقات واحد_لمتعدد ، يجب دائما تخزين العلاقة في جانب "واحد" ، ويجب دائما اشتقاق جانب "المتعدد". سيؤدي تخزين العلاقة بهذه الطريقة ، بدلا من تخزين مجموعة من الكيانات على الجانب "متعدد" ، إلى أداء أفضل بشكل كبير لكل من فهرسة واستعلام الـ subgraph. بشكل عام ، يجب تجنب تخزين مصفوفات الكيانات. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. 
In general, storing arrays of entities should be avoided as much as is practical. -#### مثال +#### Example -يمكننا إنشاء أرصدة لتوكن يمكن الوصول إليه من التوكن عن طريق اشتقاق حقل ` tokenBalances `: +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: ```graphql type Token @entity { @@ -305,13 +305,13 @@ type TokenBalance @entity { } ``` -#### علاقات متعدد_لمتعدد +#### Many-To-Many Relationships -بالنسبة لعلاقات متعدد_لمتعدد ، مثل المستخدمين الذين قد ينتمي كل منهم إلى عدد من المؤسسات ، فإن الطريقة الأكثر وضوحا ، ولكنها ليست الأكثر أداء بشكل عام ، طريقة لنمذجة العلاقة كمصفوفة في كل من الكيانين المعنيين. إذا كانت العلاقة متماثلة ، فيجب تخزين جانب واحد فقط من العلاقة ويمكن اشتقاق الجانب الآخر. +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. -#### مثال +#### Example -عرف البحث العكسي من نوع كيان ` User` إلى نوع كيان ` Organization`. في المثال أدناه ، يتم تحقيق ذلك من خلال البحث عن خاصية` members ` من داخل كيان ` Organization `. في الاستعلامات ، سيتم حل حقل ` organizations` في ` User` من خلال البحث عن جميع كيانات ` Organization` التي تتضمن ID المستخدم. +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. ```graphql type Organization @entity { @@ -327,7 +327,7 @@ type User @entity { } ``` -هناك طريقة أكثر فاعلية لتخزين هذه العلاقة وهي من خلال جدول mapping يحتوي على إدخال واحد لكل زوج ` User` / ` Organization` بمخطط مثل +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like ```graphql type Organization @entity { @@ -349,7 +349,7 @@ type UserOrganization @entity { } ``` -يتطلب هذا الأسلوب أن تنحدر الاستعلامات إلى مستوى إضافي واحد لاستردادها ، على سبيل المثال ، المؤسسات للمستخدمين: +This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: ```graphql query usersWithOrganizations { @@ -364,11 +364,11 @@ query usersWithOrganizations { } ``` -هذه الطريقة الأكثر إتقانا لتخزين علاقات متعدد_لمتعدد ستؤدي إلى بيانات مخزنة أقل للـ subgraph، وبالتالي غالبا إلى subgraph ما يكون أسرع في الفهرسة والاستعلام. +This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. -#### إضافة تعليقات إلى المخطط +#### Adding comments to the schema -وفقا لمواصفات GraphQL ، يمكن إضافة التعليقات فوق خاصيات كيان المخطط باستخدام الاقتباسات المزدوجة ` "" `. هذا موضح في المثال أدناه: +As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -378,13 +378,13 @@ type MyFirstEntity @entity { } ``` -## تعريف حقول البحث عن النص الكامل +## Defining Fulltext Search Fields -استعلامات بحث النص الكامل تقوم بفلترة وترتيب الكيانات بناء على إدخال نص البحث. 
استعلامات النص الكامل قادرة على إرجاع التطابقات للكلمات المتشابهة عن طريق معالجة إدخال نص الاستعلام إلى الـ stems قبل مقارنة ببيانات النص المفهرس. +Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. -تعريف استعلام النص الكامل يتضمن اسم الاستعلام وقاموس اللغة المستخدم لمعالجة حقول النص وخوارزمية الترتيب المستخدمة لترتيب النتائج والحقول المضمنة في البحث. كل استعلام نص كامل قد يمتد إلى عدة حقول ، ولكن يجب أن تكون جميع الحقول المضمنة من نوع كيان واحد. +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. -لإضافة استعلام نص كامل ، قم بتضمين نوع ` _Schema_ ` مع نص كامل موجه في مخطط GraphQL. +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. ```graphql type _Schema_ @@ -407,7 +407,7 @@ type Band @entity { } ``` -يمكن استخدام حقل المثال ` bandSearch ` في الاستعلامات لفلترة كيانات ` Band ` استنادا إلى المستندات النصية في الـ ` name ` ، ` description` و ` bio `. انتقل إلى [GraphQL API - Queries](/developer/graphql-api#queries) للحصول على وصف لـ API بحث النص الكامل ولمزيد من الأمثلة المستخدمة. +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. ```graphql query { @@ -420,49 +420,49 @@ query { } ``` -> ** [ إدارة الميزات ](#experimental-features): ** من ` specVersion ` ` 0.0.4 ` وما بعده ، يجب الإعلان عن ` fullTextSearch ` ضمن قسم ` features ` في the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. -### اللغات المدعومة +### Languages supported -اختيار لغة مختلفة سيكون له تأثير نهائي ، على الرغم من دقتها في بعض الأحيان ، إلا أنها تؤثر على API بحث النص الكامل. يتم فحص الحقول التي يغطيها حقل استعلام نص_كامل في سياق اللغة المختارة ، وبالتالي فإن المفردات الناتجة عن التحليل واستعلامات البحث تختلف من لغة إلى لغة. على سبيل المثال: عند استخدام القاموس التركي المدعوم ، فإن "token" ينشأ من "toke" بينما قاموس اللغة الإنجليزية سيشتقها إلى "token". +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". 
-قواميس اللغة المدعومة: +Supported language dictionaries: -| الرمز | القاموس | -| ------ | ------- | -| simple | عام | -| da | دنماركي | -| nl | هولندي | -| en | إنجليزي | -| fi | فنلندي | -| fr | فرنسي | -| de | ألماني | -| hu | مجري | -| it | إيطالي | -| no | نرويجي | -| pt | برتغالي | -| ro | روماني | -| ru | روسي | -| es | إسباني | -| sv | سويدي | -| tr | تركي | +| Code | Dictionary | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portugese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | -### خوارزميات التصنيف +### Ranking Algorithms -الخوارزميات المدعومة لترتيب النتائج: +Supported algorithms for ordering results: -| الخوارزمية | الوصف | -| ------------- | ------------------------------------------------------------ | -| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. | -| proximityRank | مشابه لـ rank ولكنه يشمل أيضا القرب من المطابقات. | +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | Use the match quality (0-1) of the fulltext query to order the results. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | -## كتابة الـ Mappings +## Writing Mappings -The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. تتم كتابة الـ Mappings في مجموعة فرعية من [ TypeScript ](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) تسمى [AssemblyScript ](https: //github.com/AssemblyScript/assemblyscript/wiki) والتي يمكن ترجمتها إلى WASM ([ WebAssembly ](https://webassembly.org/)). يعتبر AssemblyScript أكثر صرامة من TypeScript العادي ، ولكنه يوفر تركيبا مألوفا. +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. -لكل معالج حدث تم تعريفه في ` subgraph.yaml ` ضمن ` mapping.eventHandlers ` ، قم بإنشاء دالة صادرة بنفس الاسم. يجب أن يقبل كل معالج بارمترا واحدا يسمى ` event ` بنوع مطابق لاسم الحدث الذي تتم معالجته. +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -في مثال الـ subgraph ، يحتوي ` src / mapping.ts ` على معالجات لأحداث ` NewGravatar ` و ` UpdatedGravatar `: +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -489,31 +489,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -يأخذ المعالج الأول حدث ` NewGravatar ` وينشئ كيان ` Gravatar ` جديد بـ ` new Gravatar (event.params.id.toHex ()) ` ،مالئا حقول الكيان باستخدام بارامترات الحدث المقابلة. يتم تمثيل instance الكيان بالمتغير ` gravatar ` ، مع قيمة معرف `()event.params.id.toHex `. 
+The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. -يحاول المعالج الثاني تحميل ` Gravatar ` الموجود من مخزن Graph Node. إذا لم يكن موجودا بعد ، فإنه يتم إنشاؤه عند الطلب. يتم بعد ذلك تحديث الكيان لمطابقة بارامترات الحدث الجديدة ، قبل حفظه مرة أخرى في المخزن باستخدام ` ()gravatar.save `. +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. -### الـ IDs الموصى بها لإنشاء كيانات جديدة +### Recommended IDs for Creating New Entities -يجب أن يكون لكل كيان ` id` فريدا بين جميع الكيانات من نفس النوع. يتم تعيين قيمة ` id ` للكيان عند إنشاء الكيان. فيما يلي بعض قيم ` id ` الموصى بها التي يجب مراعاتها عند إنشاء كيانات جديدة. ملاحظة: قيمة ` id `يجب أن تكون `string`. +Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. - `event.params.id.toHex()` - `event.transaction.from.toHex()` - `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` -نحن نقدم [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) التي تحتوي على أدوات مساعدة للتفاعل مع مخزن Graph Node وملائمة للتعامل مع بيانات العقد الذكي والكيانات. يمكنك استخدام هذه المكتبة في mappings الخاص بك عن طريق استيراد `graphprotocol/graph-ts` in `mapping.ts@`. +We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. -## توليد الكود +## Code Generation -من أجل جعل العقود الذكية والأحداث والكيانات سهلة وآمنة ، يمكن لـ Graph CLI إنشاء أنواع AssemblyScript من مخطط subgraph's GraphQL وعقد الـ ABIs المضمنة في مصادر البيانات. +In order to make working smart contracts, events and entities easy and type-safe, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. -يتم ذلك بـ +This is done with ```sh graph codegen [--output-dir ] [] ``` -ولكن في معظم الحالات ، تكون الـ subgraphs مهيأة مسبقا بالفعل عبر ` package.json ` للسماح لك ببساطة بتشغيل واحد مما يلي لتحقيق نفس الشيء: +but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -523,7 +523,7 @@ yarn codegen npm run codegen ``` -سيؤدي هذا إلى إنشاء فئة AssemblyScript لكل عقد ذكي في ملفات ABI المذكورة في ` subgraph.yaml ` ، مما يسمح لك بربط هذه العقود بعناوين محددة في الـ mappings واستدعاء methods العقد للكتلة التي تتم معالجتها. وستنشئ أيضا فئة لكل حدث للعقد لتوفير وصول سهل إلى بارامترات الحدث بالإضافة إلى الكتلة والإجراء التي نشأ منها الحدث. كل هذه الأنواع تكتب إلى `//.ts`. 
في مثال الـ subgraph ، سيكون هذا `generated/Gravity/Gravity.ts`,مما يسمح للـ mappings باستيراد هذه الأنواع باستخدام +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with ```javascript import { @@ -535,23 +535,23 @@ import { } from '../generated/Gravity/Gravity' ``` -بالإضافة إلى ذلك ، يتم إنشاء فئة واحدة لكل نوع كيان في مخطط الـ subgraph's GraphQL. توفر هذه الفئات إمكانية تحميل كيان نوغ آمن والقراءة والكتابة إلى حقول الكيان بالإضافة إلى `save()` method لكتابة الكيانات للمخزن. تمت كتابة جميع فئات الكيانات إلى `/schema.ts`, مما يسمح للـ mappings باستيرادها باستخدام +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** يجب إجراء إنشاء الكود مرة أخرى بعد كل تغيير في مخطط GraphQL أو ABI المضمنة في الـ manifest. يجب أيضا إجراؤه مرة واحدة على الأقل قبل بناء أو نشر الـ subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. -إنشاء الكود لا يتحقق من كود الـ mapping الخاص بك في `src/mapping.ts`. إذا كنت تريد التحقق من ذلك قبل محاولة نشر الـ subgraph الخاص بك في Graph Explorer ، فيمكنك تشغيل `yarn build` وإصلاح أي أخطاء في تركيب الجملة التي قد يعثر عليها المترجم TypeScript. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. -## قوالب مصدر البيانات +## Data Source Templates -النمط الشائع في عقود Ethereum الذكية هو استخدام عقود السجل أو المصنع ، حيث أحد العقود ينشئ أو يدير أو يشير إلى عدد اعتباطي من العقود الأخرى التي لكل منها حالتها وأحداثها الخاصة. عناوين هذه العقود الفرعية قد تكون أو لا تكون معروفة مقدما وقد يتم إنشاء و / أو إضافة العديد من هذه العقود بمرور الوقت. هذا هو السبب في أنه في مثل هذه الحالات ، يكون تعريف مصدر بيانات واحد أو عدد ثابت من مصادر البيانات أمرا مستحيلا ويلزم اتباع نهج أكثر ديناميكية: _قوالب مصدر البيانات_. +A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. 
-### مصدر البيانات للعقد الرئيسي +### Data Source for the Main Contract -أولاً ، تقوم بتعريف مصدر بيانات منتظم للعقد الرئيسي. يُظهر المقتطف أدناه مثالا مبسطا لمصدر البيانات لعقد تبادل[ Uniswap ](https://uniswap.io). لاحظ معالج الحدث `NewExchange(address,address)`. يتم اصدار هذا عندما يتم إنشاء عقد تبادل جديد على السلسلة بواسطة عقد المصنع. +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. ```yaml dataSources: @@ -576,9 +576,9 @@ dataSources: handler: handleNewExchange ``` -### قوالب مصدر البيانات للعقود التي تم إنشاؤها ديناميكيا +### Data Source Templates for Dynamically Created Contracts -بعد ذلك ، أضف _ قوالب مصدر البيانات _ إلى الـ manifest. وهي متطابقة مع مصادر البيانات العادية ، باستثناء أنها تفتقر إلى عنوان عقد معرف مسبقا تحت ` source `. عادة ، يمكنك تعريف قالب واحد لكل نوع من أنواع العقود الفرعية المدارة أو المشار إليها بواسطة العقد الأصلي. +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. ```yaml dataSources: @@ -612,9 +612,9 @@ templates: handler: handleRemoveLiquidity ``` -### إنشاء قالب مصدر البيانات +### Instantiating a Data Source Template -في الخطوة الأخيرة ، تقوم بتحديث mapping عقدك الرئيسي لإنشاء instance لمصدر بيانات ديناميكي من أحد القوالب. في هذا المثال ، يمكنك تغيير mapping العقد الرئيسي لاستيراد قالب ` Exchange ` واستدعاء method الـ`Exchange.create(address)` لبدء فهرسة عقد التبادل الجديد. +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. ```typescript import { Exchange } from '../generated/templates' @@ -626,13 +626,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> ** ملاحظة: ** مصدر البيانات الجديد سيعالج فقط الاستدعاءات والأحداث للكتلة التي تم إنشاؤها فيه وجميع الكتل التالية ، ولكنه لن يعالج البيانات التاريخية ، أي البيانات الموجودة في الكتل السابقة. +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. > -> إذا كانت الكتل السابقة تحتوي على بيانات ذات صلة بمصدر البيانات الجديد ، فمن الأفضل فهرسة تلك البيانات من خلال قراءة الحالة الحالية للعقد وإنشاء كيانات تمثل تلك الحالة في وقت إنشاء مصدر البيانات الجديد. +> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. -### سياق مصدر البيانات +### Data Source Context -تسمح سياقات مصدر البيانات بتمرير تكوين إضافي عند عمل instantiating للقالب. في مثالنا ، لنفترض أن التبادلات مرتبطة بزوج تداول معين ، والذي تم تضمينه في حدث ` NewExchange `. That information can be passed into the instantiated data source, like so: +Data source contexts allow passing extra configuration when instantiating a template. 
In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' @@ -644,7 +644,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -داخل mapping قالب ` Exchange ` ، يمكن الوصول إلى السياق بعد ذلك: +Inside a mapping of the `Exchange` template, the context can then be accessed: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -653,11 +653,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -هناك setters و getters مثل ` setString ` و ` getString ` لجميع أنواع القيم. +There are setters and getters like `setString` and `getString` for all value types. ## Start Blocks -يعد ` startBlock ` إعدادا اختياريا يسمح لك بتحديد كتلة في السلسلة والتي سيبدأ مصدر البيانات بالفهرسة. تعيين كتلة البدء يسمح لمصدر البيانات بتخطي الملايين من الكتل التي ربما ليست ذات صلة. عادةً ما يقوم مطور الرسم البياني الفرعي بتعيين ` startBlock ` إلى الكتلة التي تم فيها إنشاء العقد الذكي لمصدر البيانات. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -683,23 +683,23 @@ dataSources: handler: handleNewEvent ``` -> ** ملاحظة: ** يمكن البحث عن كتلة إنشاء العقد بسرعة على Etherscan: +> **Note:** The contract creation block can be quickly looked up on Etherscan: > -> 1. ابحث عن العقد بإدخال عنوانه في شريط البحث. -> 2. انقر فوق hash إجراء الإنشاء في قسم `Contract Creator`. -> 3. قم بتحميل صفحة تفاصيل الإجراء حيث ستجد كتلة البدء لذلك العقد. +> 1. Search for the contract by entering its address in the search bar. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Load the transaction details page where you'll find the start block for that contract. -## معالجات الاستدعاء +## Call Handlers -بينما توفر الأحداث طريقة فعالة لجمع التغييرات ذات الصلة بحالة العقد ، تتجنب العديد من العقود إنشاء سجلات لتحسين تكاليف الغاز. في هذه الحالات ، يمكن لـ subgraph الاشتراك في الاستدعاء الذي يتم إجراؤه على عقد مصدر البيانات. يتم تحقيق ذلك من خلال تعريف معالجات الاستدعاء التي تشير إلى signature الدالة ومعالج الـ mapping الذي سيعالج الاستدعاءات لهذه الدالة. لمعالجة هذه المكالمات ، سيتلقى معالج الـ mapping الـ`ethereum.Call` كـ argument مع المدخلات المكتوبة والمخرجات من الاستدعاء. ستؤدي الاستدعاءات التي يتم إجراؤها على أي عمق في سلسلة استدعاء الاجراء إلى تشغيل الـ mapping، مما يسمح بالتقاط النشاط مع عقد مصدر البيانات من خلال عقود الـ proxy. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. -لن يتم تشغيل معالجات الاستدعاء إلا في إحدى الحالتين: عندما يتم استدعاء الدالة المحددة بواسطة حساب آخر غير العقد نفسه أو عندما يتم تمييزها على أنها خارجية في Solidity ويتم استدعاؤها كجزء من دالة أخرى في نفس العقد. +Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> ** ملاحظة: ** معالجات الاستدعاء غير مدعومة في Rinkeby أو Goerli أو Ganache. تعتمد معالجات الاستدعاء حاليا على Parity tracing API و هذه الشبكات لا تدعمها. +> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. -### تعريف معالج الاستدعاء +### Defining a Call Handler -لتعريف معالج استدعاء في الـ manifest الخاص بك ، ما عليك سوى إضافة مصفوفة ` callHandlers ` أسفل مصدر البيانات الذي ترغب في الاشتراك فيه. +To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to. ```yaml dataSources: @@ -724,11 +724,11 @@ dataSources: handler: handleCreateGravatar ``` -الـ `function` هي توقيع الدالة المعياري لفلترة الاستدعاءات من خلالها. خاصية `handler` هي اسم الدالة في الـ mapping الذي ترغب في تنفيذه عندما يتم استدعاء الدالة المستهدفة في عقد مصدر البيانات. +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. -### دالة الـ Mapping +### Mapping Function -كل معالج استدعاء يأخذ بارامترا واحدا له نوع يتوافق مع اسم الدالة التي تم استدعاؤها. في مثال الـ subgraph أعلاه ، يحتوي الـ mapping على معالج عندما يتم استدعاء الدالة ` createGravatar ` ويتلقى البارامتر ` CreateGravatarCall ` كـ argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -743,22 +743,22 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -الدالة ` handleCreateGravatar ` تأخذ ` CreateGravatarCall ` جديد وهو فئة فرعية من`ethereum.Call`, ، مقدم بواسطة `graphprotocol/graph-ts@`, والذي يتضمن المدخلات والمخرجات المكتوبة للاستدعاء. يتم إنشاء النوع ` CreateGravatarCall ` من أجلك عندما تشغل`graph codegen`. +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. -## معالجات الكتلة +## Block Handlers -بالإضافة إلى الاشتراك في أحداث العقد أو استدعاءات الدوال، قد يرغب الـ subgraph في تحديث بياناته عند إلحاق كتل جديدة بالسلسلة. لتحقيق ذلك ، يمكن لـ subgraph تشغيل دالة بعد كل كتلة أو بعد الكتل التي تطابق فلترا معرفا مسبقا. +In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. 
To achieve this, a subgraph can run a function after every block or after blocks that match a predefined filter.

-### الفلاتر المدعومة
+### Supported Filters

```yaml
filter:
  kind: call
```

-_سيتم استدعاء المعالج المعرف مرة واحدة لكل كتلة تحتوي على استدعاء للعقد (مصدر البيانات) الذي تم تعريف المعالج ضمنه._
+_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-عدم وجود فلتر لمعالج الكتلة سيضمن أن المعالج يتم استدعاؤه في كل كتلة. يمكن أن يحتوي مصدر البيانات على معالج كتلة واحد فقط لكل نوع فلتر.
+The absence of a filter for a block handler will ensure that the handler is called for every block. A data source can only contain one block handler for each filter type.

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: dev
    source:
      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
      abi: Gravity
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6
      language: wasm/assemblyscript
      entities:
        - Gravatar
        - Transaction
      abis:
        - name: Gravity
          file: ./abis/Gravity.json
      blockHandlers:
        - handler: handleBlock
        - handler: handleBlockWithCallToContract
          filter:
            kind: call
```

-### دالة الـ Mapping
+### Mapping Function

-دالة الـ mapping ستتلقى `ethereum.Block` كوسيطتها الوحيدة. مثل دوال الـ mapping للأحداث ، يمكن لهذه الدالة الوصول إلى كيانات الـ subgraph الموجودة في المخزن، واستدعاء العقود الذكية وإنشاء الكيانات أو تحديثها.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.

```typescript
import { ethereum } from '@graphprotocol/graph-ts'

export function handleBlock(block: ethereum.Block): void {
  let id = block.hash.toHex()
  let entity = new Block(id)
  entity.save()
}
```

-## أحداث الـ Anonymous
+## Anonymous Events

-إذا كنت بحاجة إلى معالجة أحداث anonymous في Solidity ، فيمكن تحقيق ذلك من خلال توفير الموضوع 0 للحدث ، كما في المثال:
+If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example:

```yaml
eventHandlers:
  - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes)
    topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31'
    handler: handleGive
```

-سيتم تشغيل حدث فقط عندما يتطابق كل من التوقيع والموضوع 0. بشكل افتراضي ، `topic0` يساوي hash توقيع الحدث.
+An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature.
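Written out as a mapping, the handler named in the manifest entry above is an ordinary event handler once `topic0` has matched. The sketch below assumes `graph codegen` has produced a `LogNote` event class and that a `Note` entity exists in the schema; those names and import paths are illustrative, not taken from the example above.

```typescript
// Assumed generated types; adjust the paths and names to your own subgraph.
import { LogNote } from '../generated/DSNote/DSNote'
import { Note } from '../generated/schema'

export function handleGive(event: LogNote): void {
  // After topic0 matches, an anonymous event is decoded like any other event
  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
  let note = new Note(id)
  note.save()
}
```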
-## الميزات التجريبية +## Experimental features -بدءًا من ` specVersion ` ` 0.0.4 ` ، يجب الإعلان صراحة عن ميزات الـ subgraph في قسم `features` في المستوى العلوي من ملف الـ manifest ، باستخدام اسم `camelCase` الخاص بهم ، كما هو موضح في الجدول أدناه: +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: -| الميزة | الاسم | -| ----------------------------------------------------- | ------------------------- | -| [أخطاء غير فادحة](#non-fatal-errors) | `nonFatalErrors` | -| [البحث عن نص كامل](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -| [IPFS على عقود Ethereum](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | +| Feature | Name | +| --------------------------------------------------------- | ------------------------- | +| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | +| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | -على سبيل المثال ، إذا كان الـ subgraph يستخدم ** بحث النص الكامل ** و ** أخطاء غير فادحة ** ، فإن حقل `features` في الـ manifest يجب أن يكون: +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml specVersion: 0.0.4 @@ -834,27 +834,27 @@ features: dataSources: ... ``` -لاحظ أن استخدام ميزة دون الإعلان عنها سيؤدي إلى حدوث ** خطأ تحقق من الصحة ** أثناء نشر الـ subgraph ، ولكن لن تحدث أخطاء إذا تم الإعلان عن الميزة ولكن لم يتم استخدامها. +Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. -### IPFS على عقود Ethereum +### IPFS on Ethereum Contracts -حالة الاستخدام الشائعة لدمج IPFS مع Ethereum هي تخزين البيانات على IPFS التي ستكون مكلفة للغاية للحفاظ عليها في السلسلة ، والإشارة إلى IPFS hash في عقود Ethereum. +A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. -بالنظر إلى IPFS hashes هذه ، يمكن لـ subgraphs قراءة الملفات المقابلة من IPFS باستخدام ` ipfs.cat ` و ` ipfs.map `. للقيام بذلك بشكل موثوق ، من الضروري أن يتم تثبيت هذه الملفات على عقدة IPFS التي تتصل بها Graph Node التي تقوم بفهرسة الـ subgraph. في حالة [hosted service](https://thegraph.com/hosted-service),يكون هذا [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). -> ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن ` ipfs.cat ` و ` ipfs.map ` ، ويجب على المطورين عدم النشر الـ subgraphs للشبكة باستخدام تلك الوظيفة عبر الـ Studio. +> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
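As a minimal sketch of the pattern described above — usable today against the hosted service's IPFS node, per the note — a mapping can fetch a pinned file by its hash and parse it as JSON. The helper name below is made up for illustration:

```typescript
import { ipfs, json, Bytes, JSONValue } from '@graphprotocol/graph-ts'

// Fetch a file by the IPFS hash referenced on chain and parse it as JSON.
// Returns null if the file could not be retrieved before the request timed out.
function readMetadata(ipfsHash: string): JSONValue | null {
  let data = ipfs.cat(ipfsHash)
  if (data == null) {
    return null
  }
  return json.fromBytes(data as Bytes)
}
```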
-من أجل تسهيل ذلك على مطوري الـ subgraph ، فريق Graph كتب أداة لنقل الملفات من عقدة IPFS إلى أخرى ، تسمى [ ipfs-sync ](https://github.com/graphprotocol/ipfs-sync). +In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). -> **[إدارة الميزات](#experimental-features):** يجب الإعلان عن ` ipfsOnEthereumContracts ` ضمن `features` في subgraph manifest. +> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. -### أخطاء غير فادحة +### Non-fatal errors -افتراضيا ستؤدي أخطاء الفهرسة في الـ subgraphs التي تمت مزامنتها بالفعل ، إلى فشل الـ subgraph وإيقاف المزامنة. يمكن بدلا من ذلك تكوين الـ Subgraphs لمواصلة المزامنة في حالة وجود أخطاء ، عن طريق تجاهل التغييرات التي أجراها المعالج والتي تسببت في حدوث الخطأ. يمنح هذا منشئوا الـ subgraph الوقت لتصحيح الـ subgraphs الخاصة بهم بينما يستمر تقديم الاستعلامات للكتلة الأخيرة ، على الرغم من أن النتائج قد تكون متعارضة بسبب الخطأ الذي تسبب في الخطأ. لاحظ أن بعض الأخطاء لا تزال كارثية دائما ، ولكي تكون غير فادحة ، يجب أن يُعرف الخطأ بأنه حتمي. +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. -> ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن الأخطاء غير الفادحة ، ويجب على المطورين عدم نشر الـ subgraphs على الشبكة باستخدام تلك الوظيفة عبر الـ Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. -يتطلب تمكين الأخطاء غير الفادحة تعيين flag الميزة في subgraph manifest كالتالي: +Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: ```yaml specVersion: 0.0.4 @@ -864,7 +864,7 @@ features: ... ``` -يجب أن يتضمن الاستعلام أيضا الاستعلام عن البيانات ذات التناقضات المحتملة من خلال الوسيطة ` subgraphError `. يوصى أيضا بالاستعلام عن ` _meta ` للتحقق مما إذا كان الـ subgraph قد تخطى الأخطاء ، كما في المثال: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -876,7 +876,7 @@ _meta { } ``` -إذا واجه الـ subgraph خطأ فسيرجع هذا الاستعلام كلا من البيانات وخطأ الـ graphql ضمن رسالة ` "indexing_error" ` ، كما في مثال الاستجابة هذا: +If the subgraph encounters an error that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -898,11 +898,11 @@ _meta { ### Grafting onto Existing Subgraphs -عندما يتم نشر الـ subgraph لأول مرة ، فإنه يبدأ في فهرسة الأحداث من كتلة نشوء السلسلة المتوافقة (أو من ` startBlock ` المعرفة مع كل مصدر بيانات) في بعض الحالات ، يكون من المفيد إعادة استخدام البيانات من subgraph موجود وبدء الفهرسة من كتلة لاحقة. يسمى هذا الوضع من الفهرسة بـ _Grafting_. 
Grafting ، على سبيل المثال ، مفيد أثناء التطوير لتجاوز الأخطاء البسيطة بسرعة في الـ mappings ، أو للحصول مؤقتا على subgraph موجود يعمل مرة أخرى بعد فشله. +When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. -> ** ملاحظة: ** الـ Grafting يتطلب أن المفهرس قد فهرس الـ subgraph الأساسي. لا يوصى باستخدامه على شبكة The Graph في الوقت الحالي ، ولا ينبغي للمطورين نشر الـ subgraphs على الشبكة باستخدام تلك الوظيفة عبر الـ Studio. +> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. -يتم عمل Grafte لـ subgraph في الـ subgraph الأساسي عندما يحتوي الـ subgraph manifest في ` subgraph.yaml ` على كتلة ` graft ` في المستوى العلوي: +A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: ```yaml description: ... @@ -911,18 +911,18 @@ graft: block: 7345624 # Block number ``` -عندما يتم نشر subgraph يحتوي الـ manifest على كتلة ` graft ` ، فإن Graph Node سوف تنسخ بيانات ` base ` subgraph بما في ذلك الـ ` block ` المعطى ثم يتابع فهرسة الـ subgraph الجديد من تلك الكتلة. يجب أن يوجد الـ subgraph الأساسي في instance الـ Graph Node المستهدف ويجب أن يكون قد تمت فهرسته حتى الكتلة المحددة على الأقل. بسبب هذا التقييد ، يجب استخدام الـ grafting فقط أثناء التطوير أو أثناء الطوارئ لتسريع إنتاج non-grafted subgraph مكافئ. +When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. -Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. أثناء تهيئة الـ grafted subgraph ، سيقوم الـ Graph Node بتسجيل المعلومات حول أنواع الكيانات التي تم نسخها بالفعل. +Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -يمكن أن يستخدم الـ grafted subgraph مخطط GraphQL غير مطابق لمخطط الـ subgraph الأساسي ، ولكنه متوافق معه. يجب أن يكون مخطط الـ subgraph صالحا في حد ذاته ولكنه قد ينحرف عن مخطط الـ subgraph الأساسي بالطرق التالية: +The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: -- يضيف أو يزيل أنواع الكيانات -- يزيل الصفات من أنواع الكيانات -- يضيف صفات nullable لأنواع الكيانات -- يحول صفات non-nullable إلى صفات nullable -- يضيف قيما إلى enums -- يضيف أو يزيل الواجهات +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[إدارة الميزات](#experimental-features):**يجب الإعلان عن ` التطعيم ` ضمن `features` في subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. From a65040f5bdef9a755fbc182522888323efa25ec6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:28 -0500 Subject: [PATCH 012/241] New translations introduction.mdx (Arabic) --- pages/ar/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ar/about/introduction.mdx b/pages/ar/about/introduction.mdx index 64f253a7995b..5f840c040400 100644 --- a/pages/ar/about/introduction.mdx +++ b/pages/ar/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: مقدمة +title: Introduction --- -هذه الصفحة ستشرح The Graph وكيف يمكنك أن تبدأ. +This page will explain what The Graph is and how you can get started. -## ما هو The Graph +## What The Graph Is -The Graph هو بروتوكول لامركزي وذلك لفهرسة البيانات والاستعلام عنها من blockchains ، بدءًا من Ethereum. حيث يمكننا من الاستعلام عن البيانات والتي من الصعب الاستعلام عنها بشكل مباشر. +The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. -المشاريع ذات العقود الذكية المعقدة مثل [ Uniswap ](https://uniswap.org/) و NFTs مثل [ Bored Ape Yacht Club ](https://boredapeyachtclub.com/) تقوم بتخزين البيانات على Ethereum blockchain ، مما يجعل من الصعب قراءة أي شيء بشكل مباشر عدا البيانات الأساسية من blockchain. +Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. -في حالة Bored Ape Yacht Club ، يمكننا إجراء قراءات أساسية على [ العقد ](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) مثل الحصول على مالك Ape معين ،أو الحصول على محتوى URI لـ Ape وذلك بناء على ال ID الخاص به، أو إجمالي العرض ، حيث تتم برمجة عمليات القراءة هذه بشكل مباشر في العقد الذكي ، ولكن في العالم الحقيقي هناك استعلامات وعمليات أكثر تقدمًا غير ممكنة مثل التجميع والبحث والعلاقات والفلترة الغير بسيطة. فمثلا، إذا أردنا الاستعلام عن Apes مملوكة لعنوان معين ،وفلترته حسب إحدى خصائصه، فلن نتمكن من الحصول على تلك المعلومات من خلال التفاعل بشكل مباشر مع العقد نفسه. 
+In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. -للحصول على هذه البيانات، يجب معالجة كل [`التحويلات`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) التي حدثت، وقراءة البيانات الوصفية من IPFS باستخدام Token ID و IPFS hash، ومن ثم تجميعه. حتى بالنسبة لهذه الأنواع من الأسئلة البسيطة نسبيا ، قد يستغرق الأمر ** ساعات أو حتى أيام ** لتطبيق لامركزي (dapp) يعمل في متصفح للحصول على إجابة. +To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. -يمكنك أيضا إنشاء الخادم الخاص بك ، ومعالجة الإجراءات هناك ، وحفظها في قاعدة بيانات ، والقيام ببناء API endpoint من أجل الاستعلام عن البيانات. ومع ذلك ، فإن هذا الخيار يتطلب موارد كثيرة ، ويحتاج إلى صيانة ، ويقدم نقطة فشل واحدة ، ويكسر خصائص الأمان الهامة المطلوبة لتحقيق اللامركزية. +You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. -**إن فهرسة بيانات الـ blockchain أمر صعب.** +**Indexing blockchain data is really, really hard.** -خصائص الـ Blockchain مثل finality أو chain reorganizations أو uncled blocks تعقد هذه العملية بشكل أكبر ، ولن تجعلها مضيعة للوقت فحسب ، بل أيضا تجعلها من الصعب من الناحية النظرية جلب نتائج الاستعلام الصحيحة من بيانات الـ blockchain. +Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. -يقوم The Graph بحل هذا الأمر من خلال بروتوكول لامركزي والذي يقوم بفهرسة والاستعلام عن بيانات الـ blockchain بكفاءة عالية. حيث يمكن بعد ذلك الاستعلام عن APIs (الـ "subgraphs" المفهرسة) باستخدام GraphQL API قياسية. اليوم ، هناك خدمة مستضافة بالإضافة إلى بروتوكول لامركزي بنفس القدرات. كلاهما مدعوم بتطبيق مفتوح المصدر لـ [ Graph Node ](https://github.com/graphprotocol/graph-node). +The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
-## كيف يعمل The Graph +## How The Graph Works -The Graph يفهرس بيانات Ethereumالـ بناء على أوصاف الـ subgraph ، والمعروفة باسم subgraph manifest. حيث أن وصف الـ subgraph يحدد العقود الذكية ذات الأهمية لـ subgraph ، ويحدد الأحداث في تلك العقود التي يجب الانتباه إليها ، وكيفية تعيين بيانات الحدث إلى البيانات التي سيخزنها The Graph في قاعدة البيانات الخاصة به. +The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. -بمجرد كتابة ` subgraph manifest ` ، يمكنك استخدام Graph CLI لتخزين التعريف في IPFS وإخبار المفهرس ببدء فهرسة البيانات لذلك الـ subgraph. +Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. -يقدم هذا الرسم البياني مزيدًا من التفاصيل حول تدفق البيانات عند نشر الـsubgraph manifest ، التعامل مع إجراءات الـ Ethereum: +This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: ![](/img/graph-dataflow.png) -تدفق البيانات يتبع الخطوات التالية: +The flow follows these steps: -1. التطبيق اللامركزي يضيف البيانات إلى الـ Ethereum من خلال إجراء على العقد الذكي. -2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. -3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. -4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. -5. التطبيق اللامركزي يستعلم عن الـ Graph Node للبيانات المفهرسة من الـ blockchain ، باستخدام node's [ GraphQL endpoint](https://graphql.org/learn/). يقوم الـ The Graph Node بدوره بترجمة استعلامات الـ GraphQL إلى استعلامات مخزن البيانات الأساسي الخاص به من أجل جلب هذه البيانات ، والاستفادة من إمكانات فهرسة المخزن. التطبيق اللامركزي يعرض تلك البيانات في واجهة مستخدم ، والتي يمكن للمستخدمين من خلالها إصدار إجراءات جديدة على Ethereum. والدورة تتكرر. +1. A decentralized application adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. -## الخطوات التالية +## Next Steps -في الأقسام التالية سوف نخوض في المزيد من التفاصيل حول كيفية تعريف الـ subgraphs ، وكيفية نشرها ،وكيفية الاستعلام عن البيانات من الفهارس التي يبنيها الـ Graph Node. 
+In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. -قبل أن تبدأ في كتابة الـ subgraph الخاص بك ، قد ترغب في إلقاء نظرة على The Graph Explorer واستكشاف بعض الـ subgraphs التي تم نشرها. تحتوي الصفحة الخاصة بكل subgraph على playground والذي يتيح لك الاستعلام عن بيانات الـ subgraph باستخدام GraphQL. +Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. From cab9d0e12f67b03520bf77278be7da9703a6ec14 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:29 -0500 Subject: [PATCH 013/241] New translations assemblyscript-api.mdx (Spanish) --- pages/es/developer/assemblyscript-api.mdx | 398 +++++++++++----------- 1 file changed, 199 insertions(+), 199 deletions(-) diff --git a/pages/es/developer/assemblyscript-api.mdx b/pages/es/developer/assemblyscript-api.mdx index 98e4e1fbdb06..2afa431fe8c5 100644 --- a/pages/es/developer/assemblyscript-api.mdx +++ b/pages/es/developer/assemblyscript-api.mdx @@ -2,60 +2,60 @@ title: AssemblyScript API --- -> Nota: ten en cuenta que si creaste un subgrafo usando el `graph-cli`/`graph-ts` en su versión `0.22.0`, debes saber que estás utilizando una versión antigua del AssemblyScript y te recomendamos mirar la [`guía para migrar`](/developer/assemblyscript-migration-guide) tu código +> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -Está página explica que APIs usar para recibir ciertos datos de los subgrafos. Dos tipos de estas APIs se describen a continuación: +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: -- La [librería de Graph TypeScript](https://github.com/graphprotocol/graph-ts) (`graph-ts`) y -- el generador de códigos provenientes de los archivos del subgrafo, `graph codegen`. +- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and +- code generated from subgraph files by `graph codegen`. -También es posible añadir otras librerías, siempre y cuando sean compatible con [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Debido a que ese lenguaje de mapeo es el que usamos, la [wiki de AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) es una fuente muy completa para las características de este lenguaje y contiene una librería estándar que te puede resultar útil. +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. -## Instalación +## Installation -Los subgrafos creados con [`graph init`](/developer/create-subgraph-hosted) vienen configurados previamente. 
Todo lo necesario para instalar estás configuraciones lo podrás encontrar en uno de los siguientes comandos: +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: ```sh yarn install # Yarn npm install # NPM ``` -Si el subgrafo fue creado con scratch, uno de los siguientes dos comandos podrá instalar la librería TypeScript como una dependencia: +If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: ```sh yarn add --dev @graphprotocol/graph-ts # Yarn npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## Referencias de API +## API Reference -La librería de `@graphprotocol/graph-ts` proporciona las siguientes APIs: +The `@graphprotocol/graph-ts` library provides the following APIs: -- Una API de `ethereum` para trabajar con contratos inteligentes de Ethereum, eventos, bloques, transacciones y valores de Ethereum. -- Un `almacenamiento` para cargar y guardar entidades en Graph Node. -- Una API de `registro` para registrar los mensajes output de The Graph y el Graph Explorer. -- Una API para `ipfs` que permite cargar archivos provenientes de IPFS. -- Una API de `json` para analizar datos en formato JSON. -- Una API para `crypto` que permite usar funciones criptográficas. -- Niveles bajos que permiten traducir entre los distintos sistemas, tales como, Ethereum, JSON, GraphQL y AssemblyScript. +- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. +- A `store` API to load and save entities from and to the Graph Node store. +- A `log` API to log messages to the Graph Node output and the Graph Explorer. +- An `ipfs` API to load files from IPFS. +- A `json` API to parse JSON data. +- A `crypto` API to use cryptographic functions. +- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. -### Versiones +### Versions -La `apiVersion` en el manifiesto del subgrafo especifica la versión de la API correspondiente al mapeo que está siendo ejecutado en el Graph Node de un subgrafo en específico. La versión actual para la APÍ de mapeo es la 0.0.6. +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| Version | Notas del lanzamiento | -| :-: | --- | -| 0.0.6 | Se agregó la casilla `nonce` a las Transacciones de Ethereum, se
añadió `baseFeePerGas` para los bloques de Ethereum | -| 0.0.5 | Se actualizó la versión del AssemblyScript a la v0.19.10 (esta incluye cambios importantes, recomendamos leer la [`guía de migración`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` actualizada a `ethereum.transaction.gasLimit` | -| 0.0.4 | Añadido la casilla de `functionSignature` para la función de Ethereum SmartContractCall | -| 0.0.3 | Añadida la casilla `from` para la función de Ethereum Call
`ethereum.call.address` actualizada a `ethereum.call.to` | -| 0.0.2 | Añadida la casilla de `input` para la función de Ethereum Transaction | +| Version | Release notes | +|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types -La documentación sobre las actualizaciones integradas en AssemblyScript puedes encontrarla en la [wiki de AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -Las siguientes integraciones son proporcionada por `@graphprotocol/graph-ts`. +The following additional types are provided by `@graphprotocol/graph-ts`. #### ByteArray @@ -63,24 +63,24 @@ Las siguientes integraciones son proporcionada por `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` representa una matriz de `u8`. +`ByteArray` represents an array of `u8`. -_Construcción_ +_Construction_ -- `fromI32(x: i32): ByteArray` - Descompuesta en `x` bytes. -- `fromHexString(hex: string): ByteArray` - La longitud de la entrada debe ser uniforme. Prefijo `0x` es opcional. +- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. -_Tipo de conversiones_ +_Type conversions_ -- `toHexString(): string` - Convierte un prefijo hexadecimal iniciado con `0x`. -- `toString(): string` - Interpreta los bytes en una cadena UTF-8. -- `toBase58(): string` - Codifica los bytes en una cadena base58. -- `toU32(): u32` - Interpeta los bytes en base a little-endian `u32`. Se ejecuta en casos de un overflow. -- `toI32(): i32` - Interpreta los bytes en base a little-endian `i32`. Se ejecuta en casos de un overflow. +- `toHexString(): string` - Converts to a hex string prefixed with `0x`. +- `toString(): string` - Interprets the bytes as a UTF-8 string. +- `toBase58(): string` - Encodes the bytes into a base58 string. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. +- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. -_Operadores_ +_Operators_ -- `equals(y: ByteArray): bool` – se puede escribir como `x == y`. +- `equals(y: ByteArray): bool` – can be written as `x == y`. #### BigDecimal @@ -88,30 +88,30 @@ _Operadores_ import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal` se usa para representar una precisión decimal arbitraria. +`BigDecimal` is used to represent arbitrary precision decimals. -_Construcción_ +_Construction_ -- `constructor(bigInt: BigInt)` – creará un `BigDecimal` en base a un`BigInt`. -- `static fromString(s: string): BigDecimal` – analizará una cadena de decimales. +- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. +- `static fromString(s: string): BigDecimal` – parses from a decimal string. -_Tipo de conversiones_ +_Type conversions_ -- `toString(): string` – colocará una cadena de decimales. +- `toString(): string` – prints to a decimal string. -_Matemática_ +_Math_ -- `plus(y: BigDecimal): BigDecimal` – puede escribirse como `x + y`. -- `minus(y: BigDecimal): BigDecimal` – puede escribirse como `x - y`. -- `times(y: BigDecimal): BigDecimal` – puede escribirse como `x * y`. -- `div(y: BigDecimal): BigDecimal` – puede escribirse como `x / y`. -- `equals(y: BigDecimal): bool` – puede escribirse como `x == y`. -- `notEqual(y: BigDecimal): bool` – puede escribirse como `x != y`. 
-- `lt(y: BigDecimal): bool` – puede escribirse como `x < y`. -- `lt(y: BigDecimal): bool` – puede escribirse como `x < y`. -- `gt(y: BigDecimal): bool` – puede escribirse como `x > y`. -- `ge(y: BigDecimal): bool` – puede escribirse como `x >= y`. -- `neg(): BigDecimal` - puede escribirse como `-x`. +- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. +- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. +- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. +- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. +- `equals(y: BigDecimal): bool` – can be written as `x == y`. +- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. +- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `le(y: BigDecimal): bool` – can be written as `x <= y`. +- `gt(y: BigDecimal): bool` – can be written as `x > y`. +- `ge(y: BigDecimal): bool` – can be written as `x >= y`. +- `neg(): BigDecimal` - can be written as `-x`. #### BigInt @@ -119,47 +119,47 @@ _Matemática_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` es usado para representar nuevos enteros grandes. Esto incluye valores de Ethereum similares a `uint32` hacia `uint256` y `int64` hacia `int256`. Todo por debajo de `uint32`. como el `int32`, `uint24` o `int8` se representa como `i32`. - -La clase `BigInt` tiene la siguiente API: - -_Construcción_ - -- `BigInt.fromI32(x: i32): BigInt` – creará un `BigInt` en base a un `i32`. -- `BigInt.fromString(s: string): BigInt`– Analizará un `BigInt` dentro de una cadena. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interpretará `bytes` sin firmar, o un little-endian entero. Si tu entrada es big-endian, deberás llamar primero el código `.reverse()`. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – interpretará los `bytes` como una firma, en un little-endian entero. Si tu entrada es big-endian, deberás llamar primero el código `.reverse()`. - - _Tipo de conversiones_ - -- `x.toHex(): string` - se transforma `BigInt` en un string de caracteres hexadecimales. -- `x.toString(): string` – se transforma `BigInt` en un string de numero decimal. -- `x.toI32(): i32` – retorna el `BigInt` como una `i32`; falla si el valor no encaja en `i32`. Es una buena idea comprobar primero `x.isI32()`. -- `x.toBigDecimal(): BigDecimal` - se convierte en un decimal sin parte fraccionaria. - -_Matemática_ - -- `x.plus(y: BigInt): BigInt` – puede ser escrito como `x + y`. -- `x.minus(y: BigInt): BigInt` – puede ser escrito como `x - y`. -- `x.times(y: BigInt): BigInt` – puede ser escrito como `x * y`. -- `x.div(y: BigInt): BigInt` – puede ser escrito como `x / y`. -- `x.mod(y: BigInt): BigInt` – puede ser escrito como `x % y`. -- `x.equals(y: BigInt): bool` – puede ser escrito como `x == y`. -- `x.notEqual(y: BigInt): bool` – puede ser escrito como `x != y`. -- `x.lt(y: BigInt): bool` – puede ser escrito como `x < y`. -- `x.le(y: BigInt): bool` – puede ser escrito como `x <= y`. -- `x.gt(y: BigInt): bool` – puede ser escrito como `x > y`. -- `x.ge(y: BigInt): bool` – puede ser escrito como `x >= y`. -- `x.neg(): BigInt` – puede ser escrito como `-x`. -- `x.divDecimal(y: BigDecimal): BigDecimal` – divide por un decimal, dando un resultado decimal. -- `x.isZero(): bool` – Conveniencia para comprobar si el número es cero. -- `x.isI32(): bool` – Comprueba si el número encaja en un `i32`. -- `x.abs(): BigInt` –Valor absoluto. -- `x.pow(exp: u8): BigInt` – Exponenciación. -- `bitOr(x: BigInt, y: BigInt): BigInt` puede ser escrito como `x | y`. 
-- `bitAnd(x: BigInt, y: BigInt): BigInt` – puede ser escrito como `x & y`. -- `leftShift(x: BigInt, bits: u8): BigInt` – puede ser escrito como `x << y`. -- `rightShift(x: BigInt, bits: u8): BigInt` – puede ser escrito como `x >> y`. +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. + +The `BigInt` class has the following API: + +_Construction_ + +- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. + + _Type conversions_ + +- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. +- `x.toString(): string` – turns `BigInt` into a decimal number string. +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. +- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. + +_Math_ + +- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. +- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. +- `x.times(y: BigInt): BigInt` – can be written as `x * y`. +- `x.div(y: BigInt): BigInt` – can be written as `x / y`. +- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. +- `x.equals(y: BigInt): bool` – can be written as `x == y`. +- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. +- `x.lt(y: BigInt): bool` – can be written as `x < y`. +- `x.le(y: BigInt): bool` – can be written as `x <= y`. +- `x.gt(y: BigInt): bool` – can be written as `x > y`. +- `x.ge(y: BigInt): bool` – can be written as `x >= y`. +- `x.neg(): BigInt` – can be written as `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. +- `x.isZero(): bool` – Convenience for checking if the number is zero. +- `x.isI32(): bool` – Check if the number fits in an `i32`. +- `x.abs(): BigInt` – Absolute value. +- `x.pow(exp: u8): BigInt` – Exponentiation. +- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. #### TypedMap @@ -167,15 +167,15 @@ _Matemática_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` puede utilizarse para almacenar pares clave-valor. Mira [este ejemplo](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). 
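As a quick illustration of the API listed just below, a `TypedMap` can be built up and queried like this (the keys and values here are arbitrary examples):

```typescript
import { TypedMap } from '@graphprotocol/graph-ts'

let attributes = new TypedMap<string, string>()
attributes.set('color', 'blue')
let color = attributes.get('color') // 'blue'
let size = attributes.get('size') // null, since the key was never set
let hasColor = attributes.isSet('color') // true
```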
-La `TypedMap` clase tiene la siguiente API: +The `TypedMap` class has the following API: -- `new TypedMap()` – crea un mapa vacio con claves del tipo `K` y valores del tipo `T` -- `map.set(key: K, value: V): void` – establece el valor del `key` a `value` -- `map.getEntry(key: K): TypedMapEntry | null` – devuelve el par clave-valor de un `key` o `null` si el `key` no existe en el mapa -- `map.get(key: K): V | null` – returna el valor de una `key` o `null` si el `key` no existen en el mapa -- `map.isSet(key: K): bool` – returna `true` si el `key` existe en el mapa y `false` no es asi +- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` +- `map.set(key: K, value: V): void` – sets the value of `key` to `value` +- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map +- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map +- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not #### Bytes @@ -183,13 +183,13 @@ La `TypedMap` clase tiene la siguiente API: import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` se utiliza para representar matrices de bytes de longitud arbitraria. Esto incluye los valores de Ethereum de tipo `bytes`, `bytes32` etc. +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. -La clase `Bytes` extiende AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) y esto soporta todas las `Uint8Array` funcionalidades, mas los siguientes nuevos metodos: +The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: -- `b.toHex()` - devuelve un string hexadecimal que representa los bytes de la matriz -- `b.toString()` – convierte los bytes de la matriz en un string de caracteres unicode -- `b.toBase58()` –convierte un valor de Ethereum Bytes en codificación base58 (utilizada para los hashes IPFS) +- `b.toHex()` – returns a hexadecimal string representing the bytes in the array +- `b.toString()` – converts the bytes in the array to a string of unicode characters +- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) #### Address @@ -197,11 +197,11 @@ La clase `Bytes` extiende AssemblyScript's [Uint8Array](https://github.com/Assem import { Address } from '@graphprotocol/graph-ts' ``` -`Address` extiende `Bytes` para representar valores de Ethereum `address`. +`Address` extends `Bytes` to represent Ethereum `address` values. -Agrega el siguiente método sobre la API `Bytes`: +It adds the following method on top of the `Bytes` API: -- `Address.fromString(s: string): Address` – crea un `Address` desde un string hexadecimal +- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string ### Store API @@ -209,13 +209,13 @@ Agrega el siguiente método sobre la API `Bytes`: import { store } from '@graphprotocol/graph-ts' ``` -La API `store` permite cargar, guardar y eliminar entidades desde y hacia el almacén de Graph Node. +The `store` API allows to load, save and remove entities from and to the Graph Node store. 
-Las entidades escritas en el almacén se asignan uno a uno con los tipos `@entity` definidos en el esquema GraphQL del subgrafo. Para hacer que el trabajo con estas entidades sea conveniente, el comando `graph codegen` provisto por el [Graph CLI](https://github.com/graphprotocol/graph-cli) genera clases de entidades, que son subclases del tipo construido `Entity`, con captadores y seteadores de propiedades para los campos del esquema, así como métodos para cargar y guardar estas entidades. +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. -#### Creacion de entidades +#### Creating entities -El siguiente es un patrón común para crear entidades a partir de eventos de Ethereum. +The following is a common pattern for creating entities from Ethereum events. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -Cuando un evento `Transfer` es encontrado mientras se procesa la cadena, es pasado al evento handler `handleTransfer` usando el tipo generado `Transfer` (con el alias de `TransferEvent` aquí para evitar un conflicto de nombres con el tipo de entidad). Este tipo permite acceder a datos como la transacción parent del evento y sus parámetros. +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Cada entidad debe tener un ID único para evitar colisiones con otras entidades. Es bastante común que los parámetros de los eventos incluyan un identificador único que pueda ser utilizado. Nota: El uso del hash de la transacción como ID asume que ningún otro evento en la misma transacción crea entidades con este hash como ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. -#### Carga de entidades desde el almacén +#### Loading entities from the store -Si una entidad ya existe, se puede cargar desde el almacén con lo siguiente: +If an entity already exists, it can be loaded from the store with the following: ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -Como la entidad puede no existir todavía en el almacén, el `load` metodo returna al valor del tipo `Transfer | null`. Por lo tanto, puede ser necesario comprobar el caso `null` antes de utilizar el valor. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. 
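A common way to handle the `null` case is to fall back to creating the entity, reusing the `Transfer` example from this section — a sketch of the pattern rather than any additional API:

```typescript
import { Transfer } from '../generated/schema'

let id = event.transaction.hash.toHex() // or however the ID is constructed
let transfer = Transfer.load(id)
if (transfer == null) {
  // Nothing is stored under this ID yet, so start from a fresh entity
  transfer = new Transfer(id)
}
// `transfer` is now non-null and can be updated and saved as usual
transfer.save()
```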
-> **Nota:** La carga de entidades sólo es necesaria si los cambios realizados en la asignación dependen de los datos anteriores de una entidad. Mira en la siguiente sección las dos formas de actualizar las entidades existentes. +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. -#### Actualización de las entidades existentes +#### Updating existing entities -Hay dos maneras de actualizar una entidad existente: +There are two ways to update an existing entity: -1. Cargar la entidad con, por ejemplo `Transfer.load(id)`, establecer propiedades en la entidad, entonces `.save()` de nuevo en el almacen. -2. Simplemente crear una entidad con, por ejemplo `new Transfer(id)`, establecer las propiedades en la entidad, luego `.save()` en el almacen. Si la entidad ya existe, los cambios se fusionan con ella. +1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. +2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. -Cambiar las propiedades es sencillo en la mayoría de los casos, gracias a los seteadores de propiedades generados: +Changing properties is straight forward in most cases, thanks to the generated property setters: ```typescript let transfer = new Transfer(id) @@ -279,16 +279,16 @@ transfer.to = ... transfer.amount = ... ``` -También es posible desajustar las propiedades con una de las dos instrucciones siguientes: +It is also possible to unset properties with one of the following two instructions: ```typescript transfer.from.unset() transfer.from = null ``` -Esto sólo funciona con propiedades opcionales, es decir, propiedades que se declaran sin un `!` en GraphQL. Dos ejemplos serian `owner: Bytes` o `amount: BigInt`. +This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. -La actualización de las propiedades de la matriz es un poco más complicada, ya que al obtener una matriz de una entidad se crea una copia de esa matriz. Esto significa que las propiedades de la matriz tienen que ser establecidas de nuevo explícitamente después de cambiar la matriz. El siguiente asume `entity` tiene un `numbers: [BigInt!]!` campo. +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. ```typescript // This won't work @@ -302,9 +302,9 @@ entity.numbers = numbers entity.save() ``` -#### Eliminar entidades del almacen +#### Removing entities from the store -Actualmente no hay forma de remover una entidad a través de los tipos generados. En cambio, para remover una entidad es necesario pasar el nombre del tipo de entidad y el ID de la misma a `store.remove`: +There is currently no way to remove an entity via the generated types. 
Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' @@ -313,17 +313,17 @@ let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` -### API de Ethereum +### Ethereum API -La API de Ethereum proporciona acceso a los contratos inteligentes, a las variables de estado públicas, a las funciones de los contratos, a los eventos, a las transacciones, a los bloques y a la codificación/decodificación de los datos de Ethereum. +The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. -#### Compatibilidad con los tipos de Ethereum +#### Support for Ethereum Types -Al igual que con las entidades, `graph codegen` genera clases para todos los contratos inteligentes y eventos utilizados en un subgrafo. Para ello, los ABIs del contrato deben formar parte de la fuente de datos en el manifiesto del subgrafo. Normalmente, los archivos ABI se almacenan en una carpeta `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Con las clases generadas, las conversiones entre los tipos de Ethereum y los [built-in types](#built-in-types) tienen lugar detras de escena para que los autores de los subgrafos no tengan que preocuparse por ellos. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. -El siguiente ejemplo lo ilustra. Dado un esquema de subgrafos como +The following example illustrates this. Given a subgraph schema like ```graphql type Transfer @entity { @@ -333,7 +333,7 @@ type Transfer @entity { } ``` -y un `Transfer(address,address,uint256)` evento firmado en Ethereum, los valores `from`, `to` y `amount` del tipo `address`, `address` y `uint256` se convierten en `Address` y `BigInt`, permitiendo que se transmitan al `Bytes!` y `BigInt!` las propiedades de la `Transfer` entidad: +and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: ```typescript let id = event.transaction.hash.toHex() @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Eventos y datos de bloques/transacciones +#### Events and Block/Transaction Data -Los eventos de Ethereum pasados a los manejadores de eventos, como el evento `Transfer` de los ejemplos anteriores, no sólo proporcionan acceso a los parámetros del evento, sino también a su transacción parent y al bloque del que forman parte. Los siguientes datos pueden ser obtenidos desde las instancias de `event` (estas clases forman parte del módulo `ethereum` en `graph-ts`): +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. 
The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -390,11 +390,11 @@ class Transaction { } ``` -#### Acceso al Estado del Contrato Inteligente +#### Access to Smart Contract State -El código generado por `graph codegen` también incluye clases para los contratos inteligentes utilizados en el subgrafo. Se pueden utilizar para acceder a variables de estado públicas y llamar a funciones del contrato en el bloque actual. +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. -Un patrón común es acceder al contrato desde el que se origina un evento. Esto se consigue con el siguiente código: +A common pattern is to access the contract from which an event originates. This is achieved with the following code: ```typescript // Import the generated contract class @@ -411,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -Mientras el `ERC20Contract` en Ethereum tenga una función pública de sólo lectura llamada `symbol`, se puede llamar con `.symbol()`. Para las variables de estado públicas se crea automáticamente un método con el mismo nombre. +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Cualquier otro contrato que forme parte del subgrafo puede ser importado desde el código generado y puede ser vinculado a una dirección válida. +Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. -#### Tratamiento de las llamadas revertidas +#### Handling Reverted Calls -Si los métodos de sólo lectura de tu contrato pueden revertirse, entonces debes manejar eso llamando al método del contrato generado prefijado con `try_`. Por ejemplo, el contrato Gravity expone el método `gravatarToOwner`. Este código sería capaz de manejar una reversión en ese método: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -429,11 +429,11 @@ if (callResult.reverted) { } ``` -Ten en cuenta que un nodo Graph conectado a un cliente Geth o Infura puede no detectar todas las reversiones, si confías en esto te recomendamos que utilices un nodo Graph conectado a un cliente Parity. +Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. -#### Codificación/Descodificación ABI +#### Encoding/Decoding ABI -Los datos pueden codificarse y descodificarse de acuerdo con el formato de codificación ABI de Ethereum utilizando las funciones `encode` y `decode` en el modulo `ethereum`. +Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -450,39 +450,39 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! 
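// Note: `ethereum.encode` returns null when a value cannot be encoded,
// which is why the line above uses `!` to assert a non-null result.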
let decoded = ethereum.decode('(address,uint256)', encoded) ``` -Para mas informacion: +For more information: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) - More [complex example](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72). -### API de Registro +### Logging API ```typescript import { log } from '@graphprotocol/graph-ts' ``` -La API `log` permite a los subgrafos registrar información en la salida estándar del Graph Node así como en Graph Explorer. Los mensajes pueden ser registrados utilizando diferentes niveles de registro. Se proporciona una sintaxis de string de formato básico para componer los mensajes de registro a partir del argumento. +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. -La API `log` incluye las siguientes funciones: +The `log` API includes the following functions: -- `log.debug(fmt: string, args: Array): void` - registra un mensaje de depuración. -- `log.info(fmt: string, args: Array): void` - registra un mensaje informativo. -- `log.warning(fmt: string, args: Array): void` - registra una advertencia. -- `log.error(fmt: string, args: Array): void` - registra un error de mensaje. -- `log.critical(fmt: string, args: Array): void` – registra un mensaje critico _y_ termina el subgrafo. +- `log.debug(fmt: string, args: Array): void` - logs a debug message. +- `log.info(fmt: string, args: Array): void` - logs an informational message. +- `log.warning(fmt: string, args: Array): void` - logs a warning. +- `log.error(fmt: string, args: Array): void` - logs an error message. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -La API `log` toma un formato string y una matriz de valores de string. A continuación, sustituye los marcadores de posición por los valores de string de la matriz. El primer `{}` marcador de posición se sustituye por el primer valor de la matriz, el segundo marcador de posición `{}` se sustituye por el segundo valor y así sucesivamente. +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) ``` -#### Registro de uno o varios valores +#### Logging one or more values -##### Registro de un valor +##### Logging a single value -En el siguiente ejemplo, el valor del string "A" se pasa a una matriz para convertirse en`['A']` antes de ser registrado: +In the example below, the string value "A" is passed into an array to become`['A']` before being logged: ```typescript let myValue = 'A' @@ -493,9 +493,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Registro de una sola entrada de una matriz existente +##### Logging a single entry from an existing array -En el ejemplo siguiente, sólo se registra el primer valor de la matriz de argumentos, a pesar de que la matriz contiene tres valores. 
+In the example below, only the first value of the argument array is logged, despite the array containing three values. ```typescript let myArray = ['A', 'B', 'C'] @@ -506,9 +506,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -#### Registro de múltiples entradas de una matriz existente +#### Logging multiple entries from an existing array -Cada entrada de la matriz de argumentos requiere su propio marcador de posición `{}` en el string del mensaje de registro. El siguiente ejemplo contiene tres marcadores de posición `{}` en el mensaje de registro. Debido a esto, los tres valores de `myArray` se registran. +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. ```typescript let myArray = ['A', 'B', 'C'] @@ -519,9 +519,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Registro de una entrada específica de una matriz existente +##### Logging a specific entry from an existing array -Para mostrar un valor específico en la matriz, se debe proporcionar el valor indexado. +To display a specific value in the array, the indexed value must be provided. ```typescript export function handleSomeEvent(event: SomeEvent): void { @@ -530,9 +530,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Registro de información de eventos +##### Logging event information -El ejemplo siguiente registra el número de bloque, el hash de bloque y el hash de transacción de un evento: +The example below logs the block number, block hash and transaction hash from an event: ```typescript import { log } from '@graphprotocol/graph-ts' @@ -546,15 +546,15 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -### API IPFS +### IPFS API ```typescript import { ipfs } from '@graphprotocol/graph-ts' ``` -Los contratos inteligentes anclan ocasionalmente archivos IPFS en la cadena. Esto permite que las asignaciones obtengan los hashes de IPFS del contrato y lean los archivos correspondientes de IPFS. Los datos del archivo se devolverán en forma de `Bytes`, lo que normalmente requiere un procesamiento posterior, por ejemplo con la API `json` documentada más adelante en esta página. +Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. -Dado un hash o ruta de IPFS, la lectura de un archivo desde IPFS se realiza de la siguiente manera: +Given an IPFS hash or path, reading a file from IPFS is done as follows: ```typescript // Put this inside an event handler in the mapping @@ -567,9 +567,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Nota:** `ipfs.cat` no es deterministico en este momento. Si no se puede recuperar el archivo a través de la red IPFS antes de que se agote el tiempo de la solicitud, devolverá `null`. Debido a esto, siempre vale la pena comprobar el resultado para `null`. Para asegurar que los archivos puedan ser recuperados, tienen que estar anclados al nodo IPFS al que se conecta Graph Node. En el [servicio de host](https://thegraph.com/hosted-service), esto es [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). 
Mira la seccion [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) para mayor informacion. +**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. -También es posible procesar archivos de mayor tamaño en streaming con `ipfs.map`. La función espera el hash o la ruta de un archivo IPFS, el nombre de una llamada de retorno y banderas para modificar su comportamiento: +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,34 +599,34 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -La única bandera que se admite actualmente es `json`, que debe ser pasada por `ipfs.map`. Con la bandera `json`, el archivo IPFS debe consistir en una serie de valores JSON, un valor por línea. La llamada a `ipfs.map` leerá cada línea del archivo, la deserializará en un `JSONValue` y llamará a la llamada de retorno para cada una de ellas. El callback puede entonces utilizar operaciones de entidad para almacenar los datos del `JSONValue`. Los cambios de entidad se almacenan sólo cuando el manejador que llamó `ipfs.map` termina con éxito; mientras tanto, se mantienen en la memoria, y el tamaño del archivo que `ipfs.map` puede procesar es, por lo tanto, limitado. +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -Si es exitoso, `ipfs.map` retorna `void`. Si alguna invocación de la devolución de llamada causa un error, el manejador que invocó `ipfs.map` es abortado, y el subgrafo es marcado como fallido. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. -### API Cripto +### Crypto API ```typescript import { crypto } from '@graphprotocol/graph-ts' ``` -La API `crypto` pone a disposición de los usuarios funciones criptográficas para su uso en mapeos. En este momento, sólo hay una: +The `crypto` API makes a cryptographic functions available for use in mappings. 
Right now, there is only one: - `crypto.keccak256(input: ByteArray): ByteArray` -### API JSON +### JSON API ```typescript import { json, JSONValueKind } from '@graphprotocol/graph-ts' ``` -Los datos JSON pueden ser analizados usando la API `json`: +JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – analiza datos JSON desde una matriz `Bytes` -- `json.try_fromBytes(data: Bytes): Result` – version segura de `json.fromBytes`, devuelve una variante de error si el análisis falla -- `json.fromString(data: Bytes): JSONValue` – analiza datos de JSON desde un valido UTF-8 `String` -- `json.try_fromString(data: Bytes): Result` – version segura de `json.fromString`, devuelve una variante de error si el analisis falla +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -La `JSONValue` clase proporciona una forma de extraer valores de un documento JSON arbitrario. Como los valores JSON pueden ser booleans, números, matrices y más, `JSONValue` viene con una propiedad `kind` para comprobar el tipo de un valor: +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: ```typescript let value = json.fromBytes(...) @@ -635,22 +635,22 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -Además, hay un método para comprobar si el valor es `null`: +In addition, there is a method to check if the value is `null`: - `value.isNull(): boolean` -Cuando el tipo de un valor es cierto, se puede convertir a un [built-in type](#built-in-types) utilizando uno de los siguientes métodos: +When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: - `value.toBool(): boolean` - `value.toI64(): i64` - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` -(y luego convierte `JSONValue` con uno de los 5 metodos anteriores) +- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) -### Referencias de Tipo de Conversiones +### Type Conversions Reference -| Origen(es) | Destino | Funcion de Conversion | +| Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | ID | s.toHexString() | @@ -688,17 +688,17 @@ Cuando el tipo de un valor es cierto, se puede convertir a un [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Metadatos de la Fuente de Datos +### Data Source Metadata -Puedes inspeccionar la dirección del contrato, la red y el contexto de la fuente de datos que invocó el manejador a través del namespaces `dataSource`: +You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): 
DataSourceContext` -### Entity y DataSourceContext +### Entity and DataSourceContext -La clase base `Entity` y la clase hija `DataSourceContext` tienen ayudantes para establecer y obtener campos dinámicamente: +The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From ccf531fa9e25bbf1e1a93b2cb474902d519bbeae Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:30 -0500 Subject: [PATCH 014/241] New translations introduction.mdx (Japanese) --- pages/ja/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ja/about/introduction.mdx b/pages/ja/about/introduction.mdx index 2e8e73072b4b..5f840c040400 100644 --- a/pages/ja/about/introduction.mdx +++ b/pages/ja/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: イントロダクション +title: Introduction --- -このページでは、「The Graph」とは何か、どのようにして始めるのかを説明します。 +This page will explain what The Graph is and how you can get started. -## The Graph とは +## What The Graph Is -The Graph は、Ethereum をはじめとするブロックチェーンのデータをインデックス化してクエリするための分散型プロトコルです。 これにより、直接クエリすることが困難のデータのクエリが容易に可能になります。 +The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. -[Uniswap](https://uniswap.org/)のような複雑なスマートコントラクトを持つプロジェクトや、[Bored Ape Yacht Club](https://boredapeyachtclub.com/) のような NFT の取り組みでは、Ethereum のブロックチェーンにデータを保存しているため、基本的なデータ以外をブロックチェーンから直接読み取ることは実に困難です。 +Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. -Bored Ape Yacht Club の場合、ある Ape の所有者を取得したり、ID に基づいて Ape のコンテンツ URI を取得したり、総供給量を取得したりといった基本的な読み取り操作は、 [スマートコントラクト](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) に直接プログラムされているので実行できますが、集約、検索、連携、フィルタリングなど、より高度な実世界のクエリや操作はできません。 例えば、あるアドレスが所有している NFT をクエリし、その特徴の 1 つでフィルタリングしたいと思っても、コントラクト自体と直接やりとりしてその情報を得ることはできません。 +In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. -このデータを得るためには、これまでに発行されたすべての [`転送`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) イベントを処理し、トークン ID と IPFS ハッシュを使って IPFS からメタデータを読み取り、それを集約する必要があります。 このような比較的簡単な質問であっても、ブラウザ上で動作する分散型アプリケーション(dapp)が回答を得るには**数時間から数日**かかるでしょう。 +To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. 
Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. -また、独自のサーバーを構築し、そこでトランザクションを処理してデータベースに保存し、その上にデータを照会するための API エンドポイントを構築することもできます。 しかし、この方法はリソースを必要とし、メンテナンスが必要で、単一障害点となり、分散化に必要な重要なセキュリティ特性を壊してしまいます。 +You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. -**ブロックチェーンデータのインデックス作成は非常に困難です。** +**Indexing blockchain data is really, really hard.** -フィナリティ、チェーンの再編成、アンクルドブロックなどのブロックチェーンの特性は、このプロセスをさらに複雑にし、ブロックチェーンデータから正しいクエリ結果を取り出すことは、時間がかかるだけでなく、概念的にも困難です。 +Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. -The Graph は、ブロックチェーンデータにインデックスを付けて、パフォーマンスの高い効率的なクエリを可能にする分散型プロトコルでこれを解決します。 そして、これらの API(インデックス化された「サブグラフ」)は、標準的な GraphQL API でクエリを行うことができます。 現在、同じ機能を持つホスト型のサービスと、分散型のプロトコルがあります。 どちらも、オープンソースで実装されている [Graph Node](https://github.com/graphprotocol/graph-node).によって支えられています。 +The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). -## The Graph の仕組み +## How The Graph Works -The Graph は、サブグラフマニフェストと呼ばれるサブグラフ記述に基づいて、Ethereum のデータに何をどのようにインデックスするかを学習します。 サブグラフマニフェストは、そのサブグラフで注目すべきスマートコントラクト、注目すべきコントラクト内のイベント、イベントデータと The Graph がデータベースに格納するデータとのマッピング方法などを定義します。 +The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. -`サブグラフのマニフェスト`を書いたら、グラフの CLI を使ってその定義を IPFS に保存し、インデクサーにそのサブグラフのデータのインデックス作成を開始するように指示します。 +Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. -この図では、サブグラフ・マニフェストがデプロイされた後のデータの流れについて、Ethereum のトランザクションを扱って詳しく説明しています。 +This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: ![](/img/graph-dataflow.png) -フローは以下のステップに従います。 +The flow follows these steps: -1. 分散型アプリケーションは、スマートコントラクトのトランザクションを介して Ethereum にデータを追加します。 -2. スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。 -3. Graph Node は、Ethereum の新しいブロックと、それに含まれる自分のサブグラフのデータを継続的にスキャンします。 -4. Graph Node は、これらのブロックの中からあなたのサブグラフの Ethereum イベントを見つけ出し、あなたが提供したマッピングハンドラーを実行します。 マッピングとは、イーサリアムのイベントに対応して Graph Node が保存するデータエンティティを作成または更新する WASM モジュールのことです。 -5. 分散型アプリケーションは、ノードの[GraphQL エンドポイント](https://graphql.org/learn/).を使って、ブロックチェーンからインデックスされたデータを Graph Node にクエリします。 Graph Node は、GraphQL のクエリを、基盤となるデータストアに対するクエリに変換し、ストアのインデックス機能を利用してデータを取得します。 分散型アプリケーションは、このデータをエンドユーザー向けのリッチな UI に表示し、エンドユーザーはこれを使って Ethereum 上で新しいトランザクションを発行します。 このサイクルが繰り返されます。 +1. 
A decentralized application adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. -## 次のステップ +## Next Steps -次のセクションでは、サブグラフを定義する方法、サブグラフをデプロイする方法、Graph Node が構築したインデックスからデータをクエリする方法について、さらに詳しく説明します。 +In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. -独自のサブグラフを書き始める前に、グラフエクスプローラを見て、既にデプロイされているサブグラフをいくつか見てみるといいでしょう。 各サブグラフのページには、そのサブグラフのデータを GraphQL でクエリするためのプレイグラウンドが用意されています。 +Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. From 5b124c870fc0614c65f78b9837248fba0a01ee28 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:31 -0500 Subject: [PATCH 015/241] New translations introduction.mdx (Korean) --- pages/ko/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ko/about/introduction.mdx b/pages/ko/about/introduction.mdx index f401d4070f1f..5f840c040400 100644 --- a/pages/ko/about/introduction.mdx +++ b/pages/ko/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: 소개 +title: Introduction --- -이 페이지는 더 그래프가 무엇이며, 여러분들이 시작하는 방법에 대해 설명합니다. +This page will explain what The Graph is and how you can get started. -## 더 그래프란 무엇인가? +## What The Graph Is -더 그래프는 이더리움으로부터 시작한 블록체인 데이터를 인덱싱하고 쿼리하기 위한 분산형 프로토콜입니다. 이는 직접 쿼리하기 어려운 데이터 쿼리를 가능하게 해줍니다. +The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. -[유니스왑](https://uniswap.org/) 처럼 복잡한 스마트 컨트렉트를 구현하는 프로젝트나 [Bored Ape Yacht Club](https://boredapeyachtclub.com/)과 같은 NFT 이니셔티브들은 이더리움 블록체인에 데이터를 저장하기 때문에, 블록체인의 기본 데이터 외에는 직접적으로 읽기가 매우 어렵습니다. +Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. -Bored Ape Yacht Club의 경우에 우리는 [해당 컨트렉트](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) 에서 특정 유인원의 주인을 확인하거나, 그들의 ID를 기반으로 Ape의 콘텐츠 URI를 확인하거나, 혹은 총 공급량을 확인하는 등의 기본적인 읽기 작업을 수행할 수 있습니다. 이는 이러한 읽기 작업이 스마트 컨트렉트에 직접적으로 프로그래밍 되었기 때문에 가능하지만, 집계, 검색, 관계 및 단순하지 않은 필터링과 같은 더 고급 적인 실생활 쿼리 및 작업은 불가능합니다. 
예를 들어 여러분들이 특정 주소가 소유한 유인원을 쿼리하고, 그 특성 중 하나로 필터링하고자 하는 경우, 우리는 해당 컨트렉트 자체와 직접 상호 작용하여 해당 정보를 얻을 수 없습니다. +In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. -이러한 데이터를 얻기 위해서, 여러분들은 아마 그동안 발생한 모든 단일 [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) 이벤트 들을 모두 처리하고, 토큰 ID와 IPFS 해시를 사용하여 IPFS로부터 메타데이터를 읽은 후 이들을 집계해야 합니다. 이러한 유형의 비교적 간단한 쿼리에 대해서도, 아마 브라우저에서 실행되는 탈중앙화 애필리케이션(dapp)은 답을 얻기 위해 **몇 시간 혹은 며칠**이 걸릴 수도 있습니다. +To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. -또한 여러분들은 데이터를 쿼리하기 위해 자체 서버를 구축하고, 그곳에서 트랜잭션을 처리하고, 데이터베이스에 저장하고, 그 위에 API 엔드포인트를 구축할 수도 있습니다. 하지만 이 옵션은 많은 리소스를 사용하고, 유지 관리가 필요하며, 단일 실패 지점을 제공하고 또한 탈중앙화에 필수적인 중요한 보안 속성을 손상시킵니다. +You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. -**블록체인 데이터를 인덱싱하는 것은 정말로, 정말로 어렵습니다.** +**Indexing blockchain data is really, really hard.** -최종성, 체인 재구성 또는 언클 블록과 같은 블록체인 속성들은 이 프로세스를 더욱 복잡하게 만들고, 블록체인 데이터에서 정확한 쿼리 결과가 검색되도록 하기 위해 많은 시간이 소요될 뿐만 아니라 개념적으로도 어렵게 만듭니다. +Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. -더 그래프는 블록체인 데이터를 인덱싱하고 효율적이고 효과적인 쿼리를 가능하게 하는 분산형 프로토콜로 이를 해결합니다. 이러한 API(인덱싱된 "서브그래프")들을 표준 GraphQL API로 쿼리할 수 있습니다. 오늘날, 호스팅 서비스와 동일한 기능을 가진 탈중앙화 프로토콜이 존재합니다. 둘 다 [Graph Node](https://github.com/graphprotocol/graph-node)의 오픈소스 구현에 의해 뒷받침 됩니다. +The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). -## 더 그래프의 작동 방식 +## How The Graph Works -더 그래프는 서브 매니페스트라고 하는 서브그래프 설명을 기반으로 이더리움 데이터를 인덱싱하는 항목과 방법을 학습합니다. 서브그래프 설명은 서브그래프에 대한 스마트 컨트렉트, 주의를 기울여야 할 컨트렉트들의 이벤트 및 더 그래프가 데이터베이스에 저장할 데이터에 이벤트 데이터를 매핑하는 방법을 정의합니다. +The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. 
The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. -여러분들이 `subgraph manifest`를 작성한 후에 , Graph CLI를 사용하여 IPFS에 정의를 저장하고 인덱서에게 해당 서브그래프에 대한 데이터 인덱싱을 시작하도록 지시합니다. +Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. -이 다이어그램은 이더리움 트랜잭션을 처리하는 서브그래프 매니페스트가 배포된 후 데이터 흐름에 대한 자세한 정보를 제공합니다. +This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: ![](/img/graph-dataflow.png) -해당 플로우는 다음 단계를 따릅니다 : +The flow follows these steps: -1. 탈중앙화 애플리케이션은 스마트 컨트렉트의 트랜잭션을 통해 이더리움에 데이터를 추가합니다. -2. 스마트 컨트렉트는 트랜잭션을 처리하는 동안 하나 이상의 이벤트를 발생시킵니다. -3. 그래프 노드는 이더리움에서 새 블록들과 해당 블록들에 포함될 수 있는 서브그래프 데이터를 지속적으로 검색합니다. -4. 그래프 노드는 이러한 블록에서 서브그래프에 대한 이더리움 이벤트를 찾고 사용자가 제공한 매핑 핸들러를 실행합니다. 매핑은 이더리움 이벤트들에 대응해 그래프 노드가 저장하는 데이터 엔티티들을 생성하거나 업데이트하는 WASM 모듈입니다. -5. 탈중앙화 애플리케이션은 노드의 [GraphQL endpoint](https://graphql.org/learn/)를 사용하여 블록체인에서 인덱싱된 데이터를 위해 그래프 노드를 쿼리합니다. 더 그래프 노드는 GraphQL 쿼리를 기본 데이터 저장소에 대한 쿼리로 변환하여 이 데이터를 가져오고 저장소의 인덱싱 기능들을 활용합니다. 분산형 애플리케이션은 최종 사용자를 위해 이더리움에서 새로운 트랜잭션을 발생시킬 때 사용하는 풍부한 UI로 이 데이터를 표시합니다. 이 싸이클이 반복됩니다. +1. A decentralized application adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. -## 다음 단계 +## Next Steps -다음 섹션에서 우리는 서브그래프를 정의하는 방법, 배포하는 방법 및 그래프 노드가 구축하는 인덱스들로부터 데이터를 쿼리하는 방법에 대해 더 자세히 알아볼 것입니다. +In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. -자체 서브그래프를 작성하기 전에, 여러분들은 그래프 탐색기를 살펴보고 이미 배포된 일부 서브 그래프들에 대해 알아보길 희망하실 수 있습니다. 각 서브 그래프 페이지에는 여러분들이 GraphQL로 서브그래프의 데이터를 쿼리할 수 있는 영역이 포함되어 있습니다. +Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. 
From 25bdd2f08e43171bae9a3335a712c065e09d16bd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:32 -0500 Subject: [PATCH 016/241] New translations introduction.mdx (Chinese Simplified) --- pages/zh/about/introduction.mdx | 50 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 26 deletions(-) diff --git a/pages/zh/about/introduction.mdx b/pages/zh/about/introduction.mdx index e4833d5e34be..5f840c040400 100644 --- a/pages/zh/about/introduction.mdx +++ b/pages/zh/about/introduction.mdx @@ -1,49 +1,47 @@ --- -title: 介绍 +title: Introduction --- -本页将解释什么是 The Graph,以及你如何开始。 +This page will explain what The Graph is and how you can get started. -## 什么是 The Graph +## What The Graph Is -The Graph 是一个去中心化的协议,用于索引和查询区块链的数据,首先是从以太坊开始的。 它使查询那些难以直接查询的数据成为可能。 +The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. -像 [Uniswap](https://uniswap.org/)这样具有复杂智能合约的项目,以及像 [Bored Ape Yacht Club](https://boredapeyachtclub.com/) 这样的 NFTs 倡议,都在以太坊区块链上存储数据,因此,除了直接从区块链上读取基本数据外,真的很难。 +Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. -在 Bored Ape Yacht Club 的案例中,我们可以对 [合约](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code)进行基本的读取操作,比如获得某个 Ape 的所有者,根据他们的 ID 获得某个 Ape 的内容 URI,或者总供应量,因为这些读取操作是直接编入智能合约的,但是更高级的现实世界的查询和操作,比如聚合、搜索、关系和非粗略的过滤是不可能的。 例如,如果我们想查询某个地址所拥有的 apes,并通过它的某个特征进行过滤,我们将无法通过直接与合约本身进行交互来获得该信息。 +In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. -为了获得这些数据,你必须处理曾经发出的每一个 [`传输`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) 事件,使用 Token ID 和 IPFS 的哈希值从 IPFS 读取元数据,然后将其汇总。 即使是这些类型的相对简单的问题,在浏览器中运行的去中心化应用程序(dapp)也需要**几个小时甚至几天** 才能得到答案。 +To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. -你也可以建立你自己的服务器,在那里处理交易,把它们保存到数据库,并在上面建立一个 API 终端,以便查询数据。 然而,这种选择是资源密集型的,需要维护,会出现单点故障,并破坏了去中心化化所需的重要安全属性。 +You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. 
However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. -**为区块链数据编制索引真的非常非常难。** +**Indexing blockchain data is really, really hard.** -区块链的属性,如最终性、链重组或未封闭的区块,使这一过程进一步复杂化,并使从区块链数据中检索出正确的查询结果不仅耗时,而且在概念上也很难。 +Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. -The Graph 通过一个去中心化的协议解决了这一问题,该协议可以对区块链数据进行索引并实现高性能和高效率的查询。 这些 API(索引的 "子图")然后可以用标准的 GraphQL API 进行查询。 今天,有一个托管服务,也有一个具有相同功能的分去中心化协议。 两者都由 [](https://github.com/graphprotocol/graph-node)Graph Node +The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). -的开放源码实现支持。 +## How The Graph Works -## The Graph 是如何工作的 +The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. -Graph 根据子图描述(称为子图清单)来学习什么以及如何为以太坊数据建立索引。 子图描述定义了子图所关注的智能合约,这些合约中需要关注的事件,以及如何将事件数据映射到 The Graph 将存储在其数据库中的数据。 +Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. -一旦你写好了 `子图清单 `,你就可以使用 Graph CLI 将该定义存储在 IPFS 中,并告诉索引人开始为该子图编制索引数据。 - -这张图更详细地介绍了一旦部署了子图清单,处理以太坊交易的数据流。 +This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: ![](/img/graph-dataflow.png) -流程遵循这些步骤: +The flow follows these steps: -1. 一个去中心化的应用程序通过智能合约上的交易向以太坊添加数据。 -2. 智能合约在处理交易时,会发出一个或多个事件。 -3. Graph 节点不断扫描以太坊的新区块和它们可能包含的子图的数据。 -4. Graph 节点在这些区块中为你的子图找到 Ethereum 事件并运行你提供的映射处理程序。 映射是一个 WASM 模块,它创建或更新 Graph Node 存储的数据实体,以响应 Ethereum 事件。 -5. 去中心化的应用程序使用节点的[GraphQL 端点](https://graphql.org/learn/),从区块链的索引中查询 Graph 节点的数据。 Graph 节点反过来将 GraphQL 查询转化为对其底层数据存储的查询,以便利用存储的索引功能来获取这些数据。 去中心化的应用程序在一个丰富的用户界面中为终端用户显示这些数据,他们用这些数据在以太坊上发行新的交易。 就这样周而复始。 +1. A decentralized application adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. 
The cycle repeats. -## 下一步 +## Next Steps -在下面的章节中,我们将更详细地介绍如何定义子图,如何部署它们,以及如何从 Graph 节点建立的索引中查询数据。 +In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. -在你开始编写你自己的子图之前,你可能想看一下 Graph 浏览器,探索一些已经部署的子图。 每个子图的页面都包含一个操作面板,让你用 GraphQL 查询该子图的数据。 +Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. From 989964dbadde83f41100c5b684c5ab702b49c407 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:34 -0500 Subject: [PATCH 017/241] New translations network.mdx (Spanish) --- pages/es/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/es/about/network.mdx b/pages/es/about/network.mdx index a81e6ef93cbb..b19f08d12bc7 100644 --- a/pages/es/about/network.mdx +++ b/pages/es/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Visión general de la red +title: Network Overview --- -The Graph Network es un protocolo de indexación descentralizado, el cual permite organizar los datos de la blockchain. Las aplicaciones utilizan GraphQL para consultar APIs públicas, llamadas subgrafos, que sirven para recuperar los datos que están indexados en la red. Con The Graph, los desarrolladores pueden construir sus aplicaciones completamente en una infraestructura pública. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. > GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## Descripción +## Overview -The Graph Network está formada por Indexadores, Curadores y Delegadores que proporcionan servicios a la red y proveen datos a las aplicaciones Web3. Los clientes utilizan estas aplicaciones y consumen los datos. +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. -![Economía de los tokens](/img/Network-roles@2x.png) +![Token Economics](/img/Network-roles@2x.png) -Para garantizar la seguridad económica de The Graph Network y la integridad de los datos que se consultan, los participantes colocan en staking sus Graph Tokens (GRT). GRT es un token alojado en el protocolo ERC-20 de la blockchain Ethereum, utilizado para asignar recursos en la red. Los Indexadores, Curadores y Delegadores pueden prestar sus servicios y obtener ingresos por medio de la red, en proporción a su desempeño y la cantidad de GRT que hayan colocado en staking. +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. 
From 6b6da91a5a4f0f3a6b579281c243d79d5ae459f6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:36 -0500 Subject: [PATCH 018/241] New translations network.mdx (Arabic) --- pages/ar/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ar/about/network.mdx b/pages/ar/about/network.mdx index 7b0c538514ce..b19f08d12bc7 100644 --- a/pages/ar/about/network.mdx +++ b/pages/ar/about/network.mdx @@ -1,15 +1,15 @@ --- -title: نظرة عامة حول الشبكة +title: Network Overview --- -شبكة The Graph هو بروتوكول فهرسة لامركزي لتنظيم بيانات الـ blockchain. التطبيقات تستخدم GraphQL للاستعلام عن APIs المفتوحة والتي تسمى subgraphs ، لجلب البيانات المفهرسة على الشبكة. باستخدام The Graph ، يمكن للمطورين إنشاء تطبيقات بدون خادم تعمل بالكامل على البنية الأساسية العامة. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. -> عنوان GRT Token: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## نظره عامة +## Overview -شبكة TheGraph تتكون من مفهرسين (Indexers) ومنسقين (Curators) ومفوضين (Delegator) حيث يقدمون خدمات للشبكة ويقدمون البيانات لتطبيقات Web3. حيث يتم استخدام تلك التطبيقات والبيانات من قبل المستهلكين. +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. ![Token Economics](/img/Network-roles@2x.png) -لضمان الأمن الاقتصادي لشبكة The Graph وسلامة البيانات التي يتم الاستعلام عنها ، يقوم المشاركون بـ stake لـ Graph Tokens (GRT). GRT رمزه ERC-20 على Ethereum blockchain ، يستخدم لمحاصصة (allocate) الموارد في الشبكة. المفوضون والمنسقون والمفهرسون النشطون يقدمون الخدمات لذلك يمكنهم الحصول على عوائد من الشبكة ، بما يتناسب مع حجم العمل الذي يؤدونه وحصة GRT الخاصة بهم. +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. From ec651c7250eacbaab06186b1b93e16ef933cac52 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:36 -0500 Subject: [PATCH 019/241] New translations network.mdx (Japanese) --- pages/ja/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ja/about/network.mdx b/pages/ja/about/network.mdx index 83f01727e162..b19f08d12bc7 100644 --- a/pages/ja/about/network.mdx +++ b/pages/ja/about/network.mdx @@ -1,15 +1,15 @@ --- -title: ネットワークの概要 +title: Network Overview --- -グラフネットワークは、ブロックチェーンデータを整理するための分散型インデックスプロトコルです。 アプリケーションはGraphQLを使ってサブグラフと呼ばれるオープンなAPIにクエリし、ネットワーク上にインデックスされているデータを取得します。 The Graphを使うことで、開発者は公共のインフラ上で実行されるサーバーレスアプリケーションを構築することができます。 +The Graph Network is a decentralized indexing protocol for organizing blockchain data. 
Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. > GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## 概要 +## Overview -グラフネットワークは、インデクサー、キュレーター、デリゲーターにより構成され、ネットワークにサービスを提供し、Web3アプリケーションにデータを提供します。 消費者は、アプリケーションを利用し、データを消費します。 +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. -![トークンエコノミクス](/img/Network-roles@2x.png) +![Token Economics](/img/Network-roles@2x.png) -グラフネットワークの経済的な安全性と、クエリデータの完全性を確保するために、参加者はグラフトークン(GRT)をステークします。 GRTは、Ethereumブロックチェーン上でERC-20となっているワークトークンで、ネットワーク内のリソースを割り当てるために使用されます。 アクティブなインデクサー、キュレーター、デリゲーターはサービスを提供し、その作業量とGRTのステークに比例して、ネットワークから収入を得ることができます。 +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. From 36c218cac5fed6e67295000f475afd08ce4563a4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:38 -0500 Subject: [PATCH 020/241] New translations network.mdx (Korean) --- pages/ko/about/network.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/ko/about/network.mdx b/pages/ko/about/network.mdx index b7a6a6139801..b19f08d12bc7 100644 --- a/pages/ko/about/network.mdx +++ b/pages/ko/about/network.mdx @@ -1,15 +1,15 @@ --- -title: 네트워크 개요 +title: Network Overview --- -더 그래프 네트워크는 블록체인 데이터를 구성하기 위한 분산형 인덱싱 프로토콜입니다. 애플리케이션들은 GraphQL을 사용하여 서브그래프라고 하는 개방형 API를 쿼리하여 네트워크에서 인덱싱된 데이터를 검색합니다. 더 그래프를 사용하여 개발자는 완전히 범용 인프라에서 실행되는 서버리스 애플리케이션을 구축할 수 있습니다. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. -> GRT 토큰 주소: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## 개요 +## Overview -더 그래프 네트워크는 네트워크에 서비스를 제공하고 Web3 애플리케이션들에 데이터를 제공하는 인덱서, 큐레이터 및 위임자로 구성됩니다. 소비자는 애플리케이션을 사용하고 데이터를 소비합니다. +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. -![토큰 이코노믹스](/img/Network-roles@2x.png) +![Token Economics](/img/Network-roles@2x.png) -더 그래프 네트워크의 경제적 보안과 쿼리 되는 데이터의 무결성을 보장하기 위해 참여자들은 그래프 토큰(GRT)을 스테이킹하고 사용합니다. GRT는 이더리움 블록체인 상의 ERC-20 작업 토큰이며, 네트워크 내의 리소스들을 할당하는 데 사용됩니다. 활성 인덱서, 큐레이터 및 위임자는 수행하는 작업의 양과 GRT 지분에 비례하여 네트워크에 서비스를 제공하고 수익을 창출할 수 있습니다. +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). 
GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. From 282c9e22a64d7a5c6a4f4d3bbf1d15153445b8b9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:39 -0500 Subject: [PATCH 021/241] New translations network.mdx (Chinese Simplified) --- pages/zh/about/network.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/zh/about/network.mdx b/pages/zh/about/network.mdx index 7cdb059d6279..b19f08d12bc7 100644 --- a/pages/zh/about/network.mdx +++ b/pages/zh/about/network.mdx @@ -1,15 +1,15 @@ --- -title: 网络概述 +title: Network Overview --- -The Graph网络是一个去中心化的索引协议,用于组织区块链数据。 应用程序使用GraphQL查询称为子图的开放API,以检索网络上的索引数据。 通过The Graph,开发者可以建立完全在公共基础设施上运行的无服务器应用程序。 +The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. -> Grt合约地址:[0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## 概述 +## Overview -The Graph网络由索引人、策展人和委托人组成,为网络提供服务,并为Web3应用程序提供数据。 消费者使用应用程序并消费数据。 +The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. -![代币经济学](/img/Network-roles@2x.png) +![Token Economics](/img/Network-roles@2x.png) -为了确保The Graph 网络的经济安全和被查询数据的完整性,参与者将Graph 令牌(GRT)质押并使用。 GRT是一种工作代币,是以太坊区块链上的ERC-20,用于分配网络中的资源。 活跃的索引人、策展人和委托人可以提供服务,并从网络中获得收入,与他们的工作量和他们的GRT委托量成正比。 +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. 
From 85688d0b7df8212600aa11cdc208a7492b3d6b20 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:40 -0500 Subject: [PATCH 022/241] New translations assemblyscript-api.mdx (Arabic) --- pages/ar/developer/assemblyscript-api.mdx | 331 +++++++++++----------- 1 file changed, 165 insertions(+), 166 deletions(-) diff --git a/pages/ar/developer/assemblyscript-api.mdx b/pages/ar/developer/assemblyscript-api.mdx index 8e73bb511c90..2afa431fe8c5 100644 --- a/pages/ar/developer/assemblyscript-api.mdx +++ b/pages/ar/developer/assemblyscript-api.mdx @@ -2,25 +2,25 @@ title: AssemblyScript API --- -> ملاحظة: إذا أنشأت رسمًا فرعيًا قبل إصدار `graph-cli` / `graph-ts` `0.22.0` ، فأنت تستخدم إصدارًا أقدم من AssemblyScript ، نوصي بإلقاء نظرة على [ `دليل الترحيل` ](/developer/assemblyscript-migration-guide) +> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -هذه الصفحة توثق APIs المضمنة التي يمكن استخدامها عند كتابة subgraph mappings. Two kinds of APIs are available out of the box: +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: -- مكتبة Graph TypeScript(`graph-ts`) -- كود تم إنشاؤه من ملفات الـ subgraph بواسطة `graph codegen`. +- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and +- code generated from subgraph files by `graph codegen`. -من الممكن أيضا إضافة مكتبات أخرى مثل dependencies، طالما أنها متوافقة مع [ AssemblyScript ](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. -## التثبيت +## Installation -الـ Subgraphs التي تم إنشاؤها باستخدام [ `graph init` ](/developer/create-subgraph-hosted) تأتي مع dependencies مكونة مسبقا. كل ما هو مطلوب لتثبيت هذه الـ dependencies هو تشغيل أحد الأوامر التالية: +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: ```sh yarn install # Yarn npm install # NPM ``` -إذا تم إنشاء الـ subgraph من البداية ، فسيقوم أحد الأمرين التاليين بتثبيت مكتبة Graph TypeScript كـ dependency: +If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: ```sh yarn add --dev @graphprotocol/graph-ts # Yarn @@ -29,33 +29,33 @@ npm install --save-dev @graphprotocol/graph-ts # NPM ## API Reference -توفر مكتبة `graphprotocol / graph-ts@` الـ APIs التالية: +The `@graphprotocol/graph-ts` library provides the following APIs: -- واجهة برمجة تطبيقات `ethereum` للعمل مع عقود Ethereum الذكية والأحداث والكتل والإجراات وقيم Ethereum. -- واجهة برمجة تطبيقات `store` لتحميل الـ entities وحفظها من وإلى مخزن Graph Node. -- واجهة برمجة تطبيقات ` log` لتسجيل الرسائل إلى خرج Graph Node ومستكشف Graph Explorer. 
-- واجهة برمجة تطبيقات `ipfs` لتحميل الملفات من IPFS. -- واجهة برمجة تطبيقات `json` لتحليل بيانات JSON. -- واجهة برمجة تطبيقات ` crypto` لاستخدام وظائف التشفير. +- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. +- A `store` API to load and save entities from and to the Graph Node store. +- A `log` API to log messages to the Graph Node output and the Graph Explorer. +- An `ipfs` API to load files from IPFS. +- A `json` API to parse JSON data. +- A `crypto` API to use cryptographic functions. - Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. -### إصدارات +### Versions -الـ `apiVersion` في الـ subgraph manifest تحدد إصدار الـ mapping API الذي يتم تشغيله بواسطة Graph Node للـ subgraph المحدد. الاصدار الحالي لـ mapping API هو 0.0.6. +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| الاصدار | ملاحظات الإصدار | -| :-: | --- | -| 0.0.6 | تمت إضافة حقل `nonce` إلى كائن إجراء الـ Ethereum
تمت إضافة `baseFeePerGas` إلى كائن Ethereum Block | -| 0.0.5 | تمت ترقية AssemblyScript إلى الإصدار 0.19.10 (يرجى الاطلاع على [ `دليل الترحيل` ](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` أعيد تسميته إلى `ethereum.transaction.gasLimit` | -| 0.0.4 | تمت إضافة حقل `functionSignature` إلى كائن Ethereum SmartContractCall | -| 0.0.3 | تمت إضافةحقل `from` إلى كائن Ethereum Call
`etherem.call.address` تمت إعادة تسميته إلى `ethereum.call.to` | -| 0.0.2 | تمت إضافة حقل ` input` إلى كائن إجراء Ethereum | +| Version | Release notes | +|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | -### الأنواع المضمنة Built-in +### Built-in Types -يمكن العثور على الوثائق الخاصة بالأنواع الأساسية المضمنة في AssemblyScript في [ AssemblyScript wiki ](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -يتم توفير الأنواع الإضافية التالية بواسطة `graphprotocol/graph-ts@`. +The following additional types are provided by `@graphprotocol/graph-ts`. #### ByteArray @@ -63,24 +63,24 @@ npm install --save-dev @graphprotocol/graph-ts # NPM import { ByteArray } from '@graphprotocol/graph-ts' ``` -تمثل `ByteArray` مصفوفة `u8`. +`ByteArray` represents an array of `u8`. _Construction_ - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. البادئة بـ `0x` اختيارية. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. _Type conversions_ -- `toHexString (): string` - تحول إلى سلسلة سداسية عشرية مسبوقة بـ `0x`. -- `toString (): string` - تترجم البايت كسلسلة UTF-8. -- `toBase58 (): string` - ترميز البايت لسلسلة base58. -- `toU32 (): u32` - يترجم البايت كـ `u32` little-endian. Throws in case of overflow. +- `toHexString(): string` - Converts to a hex string prefixed with `0x`. +- `toString(): string` - Interprets the bytes as a UTF-8 string. +- `toBase58(): string` - Encodes the bytes into a base58 string. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. - `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. _Operators_ -- `equals(y: ByteArray): bool` – يمكن كتابتها كـ `x == y`. +- `equals(y: ByteArray): bool` – can be written as `x == y`. #### BigDecimal @@ -88,30 +88,30 @@ _Operators_ import { BigDecimal } from '@graphprotocol/graph-ts' ``` -يستخدم `BigDecimal` للتعبير عن الكسور العشرية. +`BigDecimal` is used to represent arbitrary precision decimals. _Construction_ -- `constructor(bigInt: BigInt)` – يُنشئ `BigDecimal` من `BigInt`. -- `static fromString(s: string): BigDecimal` – يحلل من سلسلة عشرية. +- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. +- `static fromString(s: string): BigDecimal` – parses from a decimal string. _Type conversions_ -- `toString(): string` – يطبع سلسلة عشرية. +- `toString(): string` – prints to a decimal string. _Math_ -- `plus(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ `x + y`. -- `minus(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ `x - y`. -- `times(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ `x * y`. -- `div(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ`x / y`. -- `equals(y: BigDecimal): bool` – يمكن كتابتها كـ `x == y`. -- `notEqual(y: BigDecimal): bool` –يمكن كتابتها كـ `x != y`. -- `lt(y: BigDecimal): bool` – يمكن كتابتها كـ `x < y`. -- `le(y: BigDecimal): bool` – يمكن كتابتها كـ `x <= y`. -- `gt(y: BigDecimal): bool` – يمكن كتابتها كـ `x > y`. -- `ge(y: BigDecimal): bool` – يمكن كتابتها كـ `x >= y`. -- `neg(): BigDecimal` - يمكن كتابتها كـ `-x`. +- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. +- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. +- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. +- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. 
+- `equals(y: BigDecimal): bool` – can be written as `x == y`. +- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. +- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `le(y: BigDecimal): bool` – can be written as `x <= y`. +- `gt(y: BigDecimal): bool` – can be written as `x > y`. +- `ge(y: BigDecimal): bool` – can be written as `x >= y`. +- `neg(): BigDecimal` - can be written as `-x`. #### BigInt @@ -119,48 +119,47 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -يستخدم `BigInt` لتمثيل أعداد صحيحة كبيرة. يتضمن ذلك قيم Ethereum من النوع `uint32` إلى `uint256` و `int64` إلى `int256`. كل شيء أدناه `uint32` ، مثل `int32` أو `uint24` أو `int8` يتم تمثيله كـ `i32`. +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. -تحتوي فئة `BigInt` على API التالية: +The `BigInt` class has the following API: _Construction_ -- `BigInt.fromI32 (x: i32): BigInt` - ينشئ `BigInt` من `i32`. -- `BigInt.fromString(s: string): BigInt`– يحلل `BigInt` من سلسلة(string). -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – يترجم `bytes` باعتباره عددا صحيحا little-endian بدون إشارة. إذا كان الإدخال الخاص بك big-endian، فقم باستدعاء `.()reverse ` أولا. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – يترجم `bytes` باعتباره عددا صحيحا little-endian له إشارة. إذا كان الإدخال الخاص بك big-endian، فاستدعي `.()reverse ` أولا. +- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. _Type conversions_ -- `x.toHex(): string` – ترجع `BigInt` إلى سلسلة سداسية العشرية. - -- `x.toString (): string` - يحول `BigInt` إلى سلسلة رقم عشري. -- `x.toI32 (): i32` - ترجع `BigInt` كـ `i32` ؛ تفشل إذا كانت القيمة لا تتناسب مع `i32`. إنها لفكرة جيدة أن تتحقق أولا من `()x.isI32`. -- `x.toBigDecimal (): BigDecimal` - يحول إلى رقم عشري بدون جزء كسري. +- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. +- `x.toString(): string` – turns `BigInt` into a decimal number string. +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. +- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. _Math_ -- `x.plus(y: BigInt): BigInt` – يمكن كتابتها كـ `x + y`. -- `x.minus(y: BigInt): BigInt` – يمكن كتابتها كـ `x - y`. -- `x.times(y: BigInt): BigInt` – يمكن كتابتها كـ `x * y`. -- `x.div(y: BigInt): BigInt` – يمكن كتابتها كـ `x / y`. -- `x.mod(y: BigInt): BigInt` – يمكن كتابتها كـ `x % y`. -- `x.equals(y: BigInt): bool` – يمكن كتابتها كـ `x == y`. -- `x.notEqual(y: BigInt): bool` –يمكن كتابتها كـ `x != y`. -- `x.lt(y: BigInt): bool` – يمكن كتابتها كـ `x < y`. -- `x.le(y: BigInt): bool` – يمكن كتابتها كـ `x <= y`. -- `x.gt(y: BigInt): bool` – يمكن كتابتها كـ `x > y`. -- `x.ge(y: BigInt): bool` – يمكن كتابتها كـ `x >= y`. -- `x.neg(): BigInt` – يمكن كتابتها كـ `-x`. -- `x.divDecimal (y: BigDecimal): BigDecimal` - يتم القسمة على عدد عشري ، مما يعطي نتيجة عشرية. -- `x.isZero(): bool` – ملائم للتحقق مما إذا كان الرقم صفرا. 
-- `x.isI32(): bool` – يتحقق مما إذا كان الرقم يناسب `i32`. -- `x.abs(): BigInt` – قيمة مطلقة. -- `x.pow(exp: u8): BigInt` – أس. -- `bitOr(x: BigInt, y: BigInt): BigInt` – يمكن كتابتها كـ `x | y`. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – يمكن كتابتها كـ `x & y`. -- `leftShift(x: BigInt, bits: u8): BigInt` –يمكن كتابتها كـ `x << y`. -- `rightShift(x: BigInt, bits: u8): BigInt` – يمكن كتابتها كـ `x >> y`. +- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. +- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. +- `x.times(y: BigInt): BigInt` – can be written as `x * y`. +- `x.div(y: BigInt): BigInt` – can be written as `x / y`. +- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. +- `x.equals(y: BigInt): bool` – can be written as `x == y`. +- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. +- `x.lt(y: BigInt): bool` – can be written as `x < y`. +- `x.le(y: BigInt): bool` – can be written as `x <= y`. +- `x.gt(y: BigInt): bool` – can be written as `x > y`. +- `x.ge(y: BigInt): bool` – can be written as `x >= y`. +- `x.neg(): BigInt` – can be written as `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. +- `x.isZero(): bool` – Convenience for checking if the number is zero. +- `x.isI32(): bool` – Check if the number fits in an `i32`. +- `x.abs(): BigInt` – Absolute value. +- `x.pow(exp: u8): BigInt` – Exponentiation. +- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. #### TypedMap @@ -168,15 +167,15 @@ _Math_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -يمكن استخدام `TypedMap` لتخزين أزواج key-value. انظر [هذا المثال ](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). -تحتوي فئة `TypedMap` على API التالية: +The `TypedMap` class has the following API: - `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` -- `map.set (key: K، value: V): void` - يضبط قيمة الـ `key` لـ `value` +- `map.set(key: K, value: V): void` – sets the value of `key` to `value` - `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map -- `map.get(key: K): V | null` – يرجع قيمة ` key` أو `null` إذا كان المفتاح ` ` غير موجود في الخريطة -- `map.isSet(key: K): bool` – يرجع `true` إذا كان الـ `key` موجودا في الخريطة و `false` إذا كان غير موجود +- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map +- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not #### Bytes @@ -184,13 +183,13 @@ import { TypedMap } from '@graphprotocol/graph-ts' import { Bytes } from '@graphprotocol/graph-ts' ``` -يتم استخدام ` Bytes` لتمثيل مصفوفات طول عشوائية من البايتات. يتضمن ذلك قيم إيثريوم من النوع ` bytes` و ` bytes32` وما إلى ذلك. +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. 
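For orientation, the sketch below shows a typical way a `Bytes` value is used inside a mapping. It relies only on the `toHex()` and `toBase58()` methods documented just below, and the `hash` parameter is a hypothetical stand-in for a value such as `event.transaction.hash`:

```typescript
import { Bytes } from '@graphprotocol/graph-ts'

// Sketch: turn a byte value (e.g. a transaction hash passed to a handler)
// into strings. `hash` stands in for a value like `event.transaction.hash`.
function describe(hash: Bytes): string {
  let hex = hash.toHex() // '0x'-prefixed hex string, often used as an entity ID
  let base58 = hash.toBase58() // base58 encoding, e.g. used for IPFS hashes
  return hex + ' / ' + base58
}
```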
-فئة `Bytes` ترث من [ Uint8Array ](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) و لذا فهو يدعم جميع وظائف `Uint8Array` ، بالإضافة إلى الـ methods الجديدة التالية: +The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: -- `b.toHex()` – ترع سلسلة سداسية عشرية تمثل الـ bytes في المصفوفة -- `b.toString()` – يحول الـ bytes في المصفوفة إلى سلسلة من unicode -- `b.toBase58()` – يحول قيمة Ethereum Bytes إلى ترميز base58 (يستخدم لـ IPFS hashes) +- `b.toHex()` – returns a hexadecimal string representing the bytes in the array +- `b.toString()` – converts the bytes in the array to a string of unicode characters +- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) #### Address @@ -198,11 +197,11 @@ import { Bytes } from '@graphprotocol/graph-ts' import { Address } from '@graphprotocol/graph-ts' ``` -` Address` امتداد لـ` Bytes` لتمثيل قيم Ethereum ` address`. +`Address` extends `Bytes` to represent Ethereum `address` values. -إنها تضيف الـ method التالية أعلىAPI الـ `Bytes`: +It adds the following method on top of the `Bytes` API: -- `Address.fromString(s: string): Address` – ينشئ `Address` من سلسلة سداسية عشرية +- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string ### Store API @@ -210,13 +209,13 @@ import { Address } from '@graphprotocol/graph-ts' import { store } from '@graphprotocol/graph-ts' ``` -تسمح واجهة برمجة التطبيقات `store` بتحميل وحفظ وإزالة الكيانات من وإلى مخزن Graph Node. +The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. لتسهيل العمل مع هذه الكيانات ، فالأمر `graph codegen` المقدم بواسطة [ Graph CLI ](https://github.com/graphprotocol/graph-cli) ينشئ فئات الكيان ، وهي فئات فرعية من النوع المضمن ` Entity` ، مع خصائص getters و setters للحقول في المخطط بالإضافة إلى methods لتحميل هذه الكيانات وحفظها. +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. -#### إنشاء الكيانات +#### Creating entities -ما يلي هو نمط شائع لإنشاء كيانات من أحداث Ethereum. +The following is a common pattern for creating entities from Ethereum events. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -242,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -عند مواجهة حدث ` Transfer` أثناء معالجة السلسلة ، يتم تمريره إلى معالج الحدث `handleTransfer` باستخدام نوع ` Transfer` المولدة (الاسم المستعار هنا لـ `TransferEvent` لتجنب تعارض التسمية مع نوع الكيان). يسمح هذا النوع بالوصول إلى البيانات مثل الإجراء الأصلي للحدث وبارامتراته. 
+When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -يجب أن يكون لكل كيان ID فريد لتجنب التضارب مع الكيانات الأخرى. من الشائع إلى حد ما أن تتضمن بارامترات الأحداث معرفا فريدا يمكن استخدامه. ملاحظة: استخدام hash الـ الإجراء كـ ID يفترض أنه لا توجد أحداث أخرى في نفس الإجراء تؤدي إلى إنشاء كيانات بهذا الـ hash كـ ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. -#### تحميل الكيانات من المخزن +#### Loading entities from the store -إذا كان الكيان موجودا بالفعل ، فيمكن تحميله من المخزن بالتالي: +If an entity already exists, it can be loaded from the store with the following: ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -260,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -نظرا لأن الكيان قد لا يكون موجودا في المتجر ، فإن method `load` تُرجع قيمة من النوع ` Transfer | null`. وبالتالي قد يكون من الضروري التحقق من حالة `null` قبل استخدام القيمة. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. -> ** ملاحظة: ** تحميل الكيانات ضروري فقط إذا كانت التغييرات التي تم إجراؤها في الـ mapping تعتمد على البيانات السابقة للكيان. انظر القسم التالي للتعرف على الطريقتين لتحديث الكيانات الموجودة. +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. -#### تحديث الكيانات الموجودة +#### Updating existing entities -هناك طريقتان لتحديث كيان موجود: +There are two ways to update an existing entity: -1. حمل الكيان بـ `Transfer.load (id)` على سبيل المثال، قم بتعيين الخصائص على الكيان ، ثم `()save.` للمخزن. -2. ببساطة أنشئ الكيان بـ ` new Transfer(id)` على سبيل المثال، قم بتعيين الخصائص على الكيان ، ثم `()save.` للمخزن. إذا كان الكيان موجودا بالفعل ، يتم دمج التغييرات فيه. +1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. +2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. -يتم تغيير الخصائص بشكل مباشر في معظم الحالات ، وذلك بفضل خاصية الـ setters التي تم إنشاؤها: +Changing properties is straight forward in most cases, thanks to the generated property setters: ```typescript let transfer = new Transfer(id) @@ -280,16 +279,16 @@ transfer.to = ... transfer.amount = ... ``` -من الممكن أيضا إلغاء الخصائص بإحدى التعليمات التالية: +It is also possible to unset properties with one of the following two instructions: ```typescript transfer.from.unset() transfer.from = null ``` -يعمل هذا فقط مع الخصائص الاختيارية ، أي الخصائص التي تم التصريح عنها بدون `! ` في GraphQL. كمثالان `owner: Bytes` أو `amount: BigInt`. +This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. 
Two examples would be `owner: Bytes` or `amount: BigInt`. -يعد تحديث خصائص المصفوفة أكثر تعقيدا ، حيث يؤدي الحصول على مصفوفة من كيان إلى إنشاء نسخة من تلك المصفوفة. هذا يعني أنه يجب تعيين خصائص المصفوفة مرة أخرى بشكل صريح بعد تغيير المصفوفة. التالي يفترض ` entity` به حقل `أرقام: [BigInt!]!`. +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. ```typescript // This won't work @@ -303,28 +302,28 @@ entity.numbers = numbers entity.save() ``` -#### إزالة الكيانات من المخزن +#### Removing entities from the store -لا توجد حاليا طريقة لإزالة كيان عبر الأنواع التي تم إنشاؤها. بدلاً من ذلك ، تتطلب إزالة الكيان تمرير اسم نوع الكيان و ID الكيان إلى `store.remove`: +There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' ... -()let id = event.transaction.hash.toHex +let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` ### Ethereum API -يوفر Ethereum API الوصول إلى العقود الذكية ومتغيرات الحالة العامة ووظائف العقد والأحداث والإجراءات والكتل وتشفير / فك تشفير بيانات Ethereum. +The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. -#### دعم أنواع الإيثيريوم +#### Support for Ethereum Types -كما هو الحال مع الكيانات ، `graph codegen` ينشئ فئات لجميع العقود الذكية والأحداث المستخدمة في الـ subgraph. لهذا ، يجب أن يكون ABI العقد جزءا من مصدر البيانات في subgraph manifest. عادة ما يتم تخزين ملفات ABI في مجلد `/abis`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -باستخدام الفئات التي تم إنشاؤها ، تحدث التحويلات بين أنواع Ethereum و [ الأنواع المضمنة ](#built-in-types) خلف الكواليس بحيث لا يضطر منشؤوا الـ subgraph إلى القلق بشأنها. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. -يوضح المثال التالي هذا. مخطط subgraph معطى مثل +The following example illustrates this. 
Given a subgraph schema like ```graphql type Transfer @entity { @@ -334,7 +333,7 @@ type Transfer @entity { } ``` -و توقيع الحدث `Transfer(address,address,uint256)` على Ethereum ، قيم ` from` ، ` to` و `amount` من النوع `address` و `address` و `uint256` يتم تحويلها إلى `Address` و `BigInt` ، مما يسمح بتمريرها إلى خصائص `!Bytes ` و `!BigInt ` للكيان `Transfer`: +and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: ```typescript let id = event.transaction.hash.toHex() @@ -345,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### الأحداث وبيانات الكتلة/ الإجراء +#### Events and Block/Transaction Data -أحداث Ethereum التي تم تمريرها إلى معالجات الأحداث ، مثل حدث `Transfer` في الأمثلة السابقة ، لا توفر فقط الوصول إلى بارامترات الحدث ولكن أيضا إلى الإجراء الأصلي والكتلة التي تشكل جزءا منها. يمكن الحصول على البيانات التالية من `event` instances (هذه الفئات هي جزء من وحدة الـ `ethereum` في `graph-ts`): +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -391,11 +390,11 @@ class Transaction { } ``` -#### الوصول إلى حالة العقد الذكي Smart Contract +#### Access to Smart Contract State -يشتمل الكود أيضا الذي تم إنشاؤه بواسطة `graph codegen` على فئات للعقود الذكية المستخدمة في الـ subgraph. يمكن استخدامها للوصول إلى متغيرات الحالة العامة واستدعاء دوال العقد في الكتلة الحالية. +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. -النمط الشائع هو الوصول إلى العقد الذي ينشأ منه الحدث. يتم تحقيق ذلك من خلال الكود التالي: +A common pattern is to access the contract from which an event originates. This is achieved with the following code: ```typescript // Import the generated contract class @@ -412,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -طالما أن `ERC20Contract` في الـ Ethereum له دالة عامة للقراءة فقط تسمى ` symbol` ، فيمكن استدعاؤها بـ `()symbol.`. بالنسبة لمتغيرات الحالة العامة ، يتم إنشاء method بنفس الاسم تلقائيا. +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -أي عقد آخر يمثل جزءا من الـ subgraph يمكن استيراده من الكود الذي تم انشاؤه ويمكن ربطه بعنوان صالح. +Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls -إذا كان من الممكن التراجع عن methods القراءة فقط لعقدك ، فيجب عليك التعامل مع ذلك عن طريق استدعاء method العقد التي تم انشاؤها والمسبوقة بـ على سبيل المثال ، يكشف عقد Gravity عن method `gravatarToOwner`. سيكون هذا الكود قادرا على معالجة التراجع في ذلك الـ method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. 
This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -430,11 +429,11 @@ if (callResult.reverted) { } ``` -لاحظ أن Graph node المتصلة بعميل Geth أو Infura قد لا تكتشف جميع المرتجعات ، إذا كنت تعتمد على ذلك ، فإننا نوصي باستخدام Graph node المتصلة بعميل Parity. +Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. #### Encoding/Decoding ABI -يمكن تشفير البيانات وفك تشفيرها وفقا لتنسيق تشفير ABI الـ Ethereum باستخدام دالتي `encode` و `decode` في الوحدة الـ `ethereum`. +Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -451,11 +450,11 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! let decoded = ethereum.decode('(address,uint256)', encoded) ``` -لمزيد من المعلومات: +For more information: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) -- تشفير/فك تشفير [Rust library/CLI](https://github.com/rust-ethereum/ethabi) -- [أمثلة معقدة](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72) أكثر. +- Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) +- More [complex example](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72). ### Logging API @@ -463,17 +462,17 @@ let decoded = ethereum.decode('(address,uint256)', encoded) import { log } from '@graphprotocol/graph-ts' ``` -تسمح واجهة برمجة التطبيقات `log` لـ subgraphs بتسجيل المعلومات إلى الخرج القياسي لـ Graph Node بالإضافة إلى Graph Explorer. يمكن تسجيل الرسائل باستخدام مستويات سجل مختلفة. بنية سلسلة التنسيق الأساسي يتم توفيرها لتكوين رسائل السجل من argument. +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. -تتضمن واجهة برمجة التطبيقات `log` الدوال التالية: +The `log` API includes the following functions: -- `log.debug(fmt: string, args: Array): void` - تسجل رسالة debug. -- `log.info(fmt: string, args: Array): void` - تسجل رسالة اعلامية. -- `log.warning(fmt: string, args: Array): void` - تسجل تحذير. -- `log.error(fmt: string, args: Array): void` - تسجل رسالة خطأ. -- `log.critical(fmt: string, args: Array): void` – تسجل رسالة حرجة _و_ وتنهي الـ subgraph. +- `log.debug(fmt: string, args: Array): void` - logs a debug message. +- `log.info(fmt: string, args: Array): void` - logs an informational message. +- `log.warning(fmt: string, args: Array): void` - logs a warning. +- `log.error(fmt: string, args: Array): void` - logs an error message. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -واجهة برمجة التطبيقات `log` تأخذ تنسيق string ومصفوفة من قيم string. ثم يستبدل placeholders بقيم string من المصفوفة. يتم استبدال placeholder `{}` الأول بالقيمة الأولى في المصفوفة ، ويتم استبدال placeholder `{}` الثاني بالقيمة الثانية وهكذا. +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. 
The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) @@ -483,7 +482,7 @@ log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue. ##### Logging a single value -في المثال أدناه ، يتم تمرير قيمة السلسلة "A" إلى مصفوفة لتصبح `['A']` قبل تسجيلها: +In the example below, the string value "A" is passed into an array to become`['A']` before being logged: ```typescript let myValue = 'A' @@ -496,7 +495,7 @@ export function handleSomeEvent(event: SomeEvent): void { ##### Logging a single entry from an existing array -في المثال أدناه ، يتم تسجيل القيمة الأولى فقط لـ argument المصفوفة، على الرغم من احتواء المصفوفة على ثلاث قيم. +In the example below, only the first value of the argument array is logged, despite the array containing three values. ```typescript let myArray = ['A', 'B', 'C'] @@ -509,7 +508,7 @@ export function handleSomeEvent(event: SomeEvent): void { #### Logging multiple entries from an existing array -يتطلب كل إدخال في arguments المصفوفة placeholder خاص به `{}` في سلسلة رسالة السجل. يحتوي المثال أدناه على ثلاثة placeholders `{}` في رسالة السجل. لهذا السبب ، يتم تسجيل جميع القيم الثلاث في `myArray`. +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. ```typescript let myArray = ['A', 'B', 'C'] @@ -522,7 +521,7 @@ export function handleSomeEvent(event: SomeEvent): void { ##### Logging a specific entry from an existing array -لعرض قيمة محددة في المصفوفة ، يجب توفير القيمة المفهرسة. +To display a specific value in the array, the indexed value must be provided. ```typescript export function handleSomeEvent(event: SomeEvent): void { @@ -533,7 +532,7 @@ export function handleSomeEvent(event: SomeEvent): void { ##### Logging event information -يسجل المثال أدناه رقم الكتلة و hash الكتلة و hash الإجراء من حدث: +The example below logs the block number, block hash and transaction hash from an event: ```typescript import { log } from '@graphprotocol/graph-ts' @@ -553,9 +552,9 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -تقوم العقود الذكية أحيانا بإرساء ملفات IPFS على السلسلة. يسمح هذا للـ mappings بالحصول على IPFS hashes من العقد وقراءة الملفات المقابلة من IPFS. سيتم إرجاع بيانات الملف كـ ` Bytes` ، والتي تتطلب عادة مزيدا من المعالجة ، على سبيل المثال مع واجهة برمجة التطبيقات `json` الموثقة لاحقا في هذه الصفحة. +Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. -IPFS hash أو مسار معطى، تتم قراءة ملف من IPFS على النحو التالي: +Given an IPFS hash or path, reading a file from IPFS is done as follows: ```typescript // Put this inside an event handler in the mapping @@ -568,9 +567,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -** ملاحظة: ** `ipfs.cat` ليست إجبارية في الوقت الحالي. لهذا السبب ، من المفيد دائما التحقق من نتيجة `null`. إذا تعذر استرداد الملف عبر شبكة Ipfs قبل انتهاء مهلة الطلب ، فسيعود `null`. 
لضمان إمكانية استرداد الملفات ، يجب تثبيتها في IPFS node التي تتصل بها Graph Node. على [الخدمة المستضافة ](https://thegraph.com/hosted-service) ، هذا هو [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). راجع قسم [تثبيت IPFS](/developer/create-subgraph-hosted#ipfs-pinning) لمزيد من المعلومات. +**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. -من الممكن أيضا معالجة الملفات الأكبر حجما بطريقة متدفقة باستخدام `ipfs.map`. تتوقع الدالة الـ hash أو مسارا لملف IPFS واسم الـ callback والـ flags لتعديل سلوكه: +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -600,9 +599,9 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -الـ flag الوحيد المدعوم حاليا هو `json` ، والذي يجب تمريره إلى `ipfs.map`. باستخدام flag الـ `json` ، يجب أن يتكون ملف IPFS من سلسلة من قيم JSON ، قيمة واحدة لكل سطر. سيؤدي استدعاء `ipfs.map` إلى قراءة كل سطر في الملف ، وإلغاء تسلسله إلى `JSONValue` واستدعاء الـ callback لكل منها. يمكن لـ callback بعد ذلك استخدام عمليات الكيان لتخزين البيانات من `JSONValue`. يتم تخزين تغييرات الكيان فقط عندما ينتهي المعالج الذي يسمى `ipfs.map` بنجاح ؛ في غضون ذلك ، يتم الاحتفاظ بها في الذاكرة ، وبالتالي يكون حجم الملف الذي يمكن لـ `ipfs.map` معالجته يكون محدودا. +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -عند النجاح ، يرجع `ipfs.map` ` بـ void`. إذا تسبب أي استدعاء لـ callback في حدوث خطأ ، فسيتم إحباط المعالج الذي استدعى `ipfs.map` ، ويتم وضع علامة على الـ subgraph على أنه فشل. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. ### Crypto API @@ -610,7 +609,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) import { crypto } from '@graphprotocol/graph-ts' ``` -توفر واجهة برمجة تطبيقات ` crypto` دوال التشفير للاستخدام في mappings. الآن ، يوجد واحد فقط: +The `crypto` API makes a cryptographic functions available for use in mappings. 
Right now, there is only one: - `crypto.keccak256(input: ByteArray): ByteArray` @@ -620,14 +619,14 @@ import { crypto } from '@graphprotocol/graph-ts' import { json, JSONValueKind } from '@graphprotocol/graph-ts' ``` -يمكن تحليل بيانات JSON باستخدام `json` API: +JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – يحول بيانات JSON من مصفوفة `Bytes` -- `json.try_fromBytes(data: Bytes): Result` – إصدار آمن من `json.fromBytes` ، يقوم بإرجاع متغير خطأ إذا فشل التحليل -- `json.fromString(data: Bytes): JSONValue` – يحلل بيانات JSON من UTF-8 `String` صالح -- `json.try_fromString(data: Bytes): Result` – اصدار آمن من `json.fromString`, يقوم بإرجاع متغير خطأ إذا فشل التحليل +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -توفر فئة `JSONValue` طريقة لسحب القيم من مستند JSON عشوائي. نظرا لأن قيم JSON يمكن أن تكون منطقية وأرقاما ومصفوفات وغيرها، فإن `JSONValue` يأتي مع خاصية `kind` للتحقق من نوع القيمة: +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: ```typescript let value = json.fromBytes(...) @@ -636,22 +635,22 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -بالإضافة إلى ذلك ، هناك method للتحقق مما إذا كانت القيمة ` null`: +In addition, there is a method to check if the value is `null`: - `value.isNull(): boolean` -عندما يكون نوع القيمة مؤكدا ، يمكن تحويلها إلى [ نوع مضمن ](#built-in-types) باستخدام إحدى الـ methods التالية: +When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: - `value.toBool(): boolean` - `value.toI64(): i64` - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (ثم قم بتحويل `JSONValue` بإحدى الـ methods الخمس المذكورة أعلاه) +- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) ### Type Conversions Reference -| المصدر(المصادر) | الغاية | دالة التحويل | +| Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | ID | s.toHexString() | @@ -691,7 +690,7 @@ if (value.kind == JSONValueKind.BOOL) { ### Data Source Metadata -يمكنك فحص عنوان العقد والشبكة وسياق مصدر البيانات الذي استدعى المعالج من خلال `dataSource` namespace: +You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: - `dataSource.address(): Address` - `dataSource.network(): string` @@ -699,7 +698,7 @@ if (value.kind == JSONValueKind.BOOL) { ### Entity and DataSourceContext -تحتوي فئة `Entity` الأساسية والفئة الفرعية `DataSourceContext` على مساعدين لتعيين الحقول والحصول عليها ديناميكيا: +The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From d47442e722836e7fb5b4e4ba3a85658f77f61e3c 
Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:41 -0500 Subject: [PATCH 023/241] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 418 +++++++++--------- 1 file changed, 209 insertions(+), 209 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index 76a1af304e61..d31a88ea52a4 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -1,18 +1,18 @@ --- -title: Crear un Subgrafo +title: Create a Subgraph --- -Antes de poder utilizar el Graph CLI, tienes que crear tu subgrafo en [Subgraph Studio](https://thegraph.com/studio). A continuación, podrás configurar tu proyecto de subgrafo y desplegarlo en la plataforma que elijas. Ten en cuenta que **los subgrafos que no indexen Ethereum mainnet no se publicarán en The Graph Network**. +Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. -El comando `graph init` se puede utilizar para configurar un nuevo proyecto de subgrafo, ya sea desde un contrato existente en cualquiera de las redes públicas de Ethereum, o desde un subgrafo de ejemplo. Este comando se puede utilizar para crear un subgrafo en el Subgraph Studio pasando `graph init --product subgraph-studio`. Si ya tienes un contrato inteligente desplegado en la red principal de Ethereum o en una de las redes de prueba, arrancar un nuevo subgrafo a partir de ese contrato puede ser una buena manera de empezar. Pero primero, un poco sobre las redes que admite The Graph. +The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. -## Redes Que Admite +## Redes admitidas -The Graph Network admite subgrafos que indexan la red principal de Ethereum: +The Graph Network supports subgraphs indexing mainnet Ethereum: - `mainnet` -**El Servicio Alojado (Hosted Service) admite Redes Adicionales en la versión beta**: +**Additional Networks are supported in beta on the Hosted Service**: - `mainnet` - `kovan` @@ -44,13 +44,13 @@ The Graph Network admite subgrafos que indexan la red principal de Ethereum: - `aurora` - `aurora-testnet` -El Hosted Service (servicio alojado) de The Graph se basa en la estabilidad y la fiabilidad de las tecnologías subyacentes, es decir, los endpoints JSON RPC proporcionados. Las redes más nuevas se marcarán como beta hasta que la red haya demostrado su estabilidad, fiabilidad y escalabilidad. Durante este período beta, existe el riesgo de que se produzcan tiempos de inactividad y comportamientos inesperados. +The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. 
Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. -Recuerda que **no podrás** publicar un subgrafo que indexe una red no-mainnet a la Graph Network descentralizada en [Subgraph Studio](/studio/subgraph-studio). +Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). -## Desde un Contrato Existente +## From An Existing Contract -El siguiente comando crea un subgrafo que indexa todos los eventos de un contrato existente. Intenta obtener la ABI del contrato desde Etherscan y vuelve a solicitar una ruta de archivo local. Si falta alguno de los argumentos opcionales, te lleva a través de un formulario interactivo. +The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. ```sh graph init \ @@ -61,23 +61,23 @@ graph init \ [] ``` -El `` es el ID de tu subgrafo en Subgraph Studio, y se puede encontrar en la página de detalles de tu subgrafo. +The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. -## Desde un Subgrafo de Ejemplo +## From An Example Subgraph -El segundo modo que admite `graph init` es la creación de un nuevo proyecto a partir de un subgrafo de ejemplo. El siguiente comando lo hace: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: ``` graph init --studio ``` -El subgrafo de ejemplo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. El subgrafo maneja estos eventos escribiendo entidades `Gravatar` en el almacén de Graph Node y asegurándose de que éstas se actualicen según los eventos. Las siguientes secciones repasarán los archivos que componen el manifiesto del subgrafo para este ejemplo. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. -## El Manifiesto de Subgrafo +## The Subgraph Manifest -El manifiesto del subgrafo `subgraph.yaml` define los contratos inteligentes que indexa tu subgrafo, a qué eventos de estos contratos prestar atención, y cómo mapear los datos de los eventos a las entidades que Graph Node almacena y permite consultar. La especificación completa de los manifiestos de subgrafos puede encontrarse en [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -Para este subgrafo de ejemplo, `subgraph.yaml` es: +For the example subgraph, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -118,59 +118,59 @@ dataSources: file: ./src/mapping.ts ``` -Las entradas importantes a actualizar para el manifiesto son: +The important entries to update for the manifest are: -- `description`: una descripción legible para el ser humano de lo que es el subgrafo. Esta descripción es mostrada por The Graph Explorer cuando el subgrafo se despliega en el Servicio Alojado. +- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. -- `repository`: la URL del repositorio donde se encuentra el manifiesto del subgrafo. Esto también lo muestra The Graph Explorer. +- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. -- `features`: una lista de todos los nombres de las [feature](#experimental-features) usadas. +- `features`: a list of all used [feature](#experimental-features) names. -- `dataSources.source`: la address del contrato inteligente, las fuentes del subgrafo, y el abi del contrato inteligente a utilizar. La address es opcional; omitirla permite indexar los eventos coincidentes de todos los contratos. +- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: el número opcional del bloque desde el que la fuente de datos comienza a indexar. En la mayoría de los casos, sugerimos utilizar el bloque en el que se creó el contrato. +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. -- `dataSources.mapping.entities`: las entidades que la fuente de datos escribe en el almacén. El esquema de cada entidad se define en el archivo schema.graphql. +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. -- `dataSources.mapping.abis`: uno o más archivos ABI con nombre para el contrato fuente, así como cualquier otro contrato inteligente con el que interactúes desde los mapeos. +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: enumera los eventos de contratos inteligentes a los que reacciona este subgrafo y los handlers en el mapeo -./src/mapping.ts en el ejemplo- que transforman estos eventos en entidades en el almacén. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: enumera las funciones de contrato inteligente a las que reacciona este subgrafo y los handlers en el mapeo que transforman las entradas y salidas a las llamadas de función en entidades en el almacén. 
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: enumera los bloques a los que reacciona este subgrafo y los handlers en el mapeo que se ejecutan cuando un bloque se agrega a la cadena. Sin un filtro, el handler de bloque se ejecutará en cada bloque. Se puede proporcionar un filtro opcional con los siguientes tipos: call`. Un filtro `call` ejecutará el handler si el bloque contiene al menos una llamada al contrato de la fuente de datos. +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. -Un único subgrafo puede indexar datos de múltiples contratos inteligentes. Añade una entrada por cada contrato del que haya que indexar datos a la array `dataSources`. +A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. -Los disparadores (triggers) de una fuente de datos dentro de un bloque se ordenan mediante el siguiente proceso: +The triggers for a data source within a block are ordered using the following process: -1. Los disparadores de eventos y llamadas se ordenan primero por el índice de la transacción dentro del bloque. -2. Los disparadores de eventos y llamadas dentro de la misma transacción se ordenan siguiendo una convención: primero los disparadores de eventos y luego los de llamadas, respetando cada tipo el orden en que se definen en el manifiesto. -3. Los disparadores de bloque se ejecutan después de los disparadores de eventos y llamadas, en el orden en que están definidos en el manifiesto. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers with in the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. -Estas normas de orden están sujetas a cambios. +These ordering rules are subject to change. -### Obtención de ABIs +### Getting The ABIs -Los archivos ABI deben coincidir con tu(s) contrato(s). Hay varias formas de obtener archivos ABI: +The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: -- Si estás construyendo tu propio proyecto, es probable que tengas acceso a tus ABIs más actuales. -- Si estás construyendo un subgrafo para un proyecto público, puedes descargar ese proyecto en tu computadora y obtener la ABI utilizando [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or usando solc para compilar. -- También puedes encontrar la ABI en [Etherscan](https://etherscan.io/), pero no siempre es fiable, ya que la ABI que se sube allí puede estar desactualizada. Asegúrate de que tienes la ABI correcta, de lo contrario la ejecución de tu subgrafo fallará. +- If you are building your own project, you will likely have access to your most current ABIs. 
+- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. -## El Esquema GraphQL +## The GraphQL Schema -El esquema para tu subgrafo está en el archivo `schema.graphql`. Los esquemas de GraphQL se definen utilizando el lenguaje de definición de interfaces de GraphQL. Si nunca has escrito un esquema GraphQL, es recomendable que consultes este manual sobre el sistema de tipos GraphQL. La documentación de referencia para los esquemas de GraphQL se puede encontrar en la sección [GraphQL API](/developer/graphql-api). +The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. -## Definir Entidades +## Defining Entities -Antes de definir las entidades, es importante dar un paso atrás y pensar en cómo están estructurados y vinculados los datos. Todas las consultas se harán contra el modelo de datos definido en el esquema del subgrafo y las entidades indexadas por el subgrafo. Debido a esto, es bueno definir el esquema del subgrafo de una manera que coincida con las necesidades de tu dapp. Puede ser útil imaginar las entidades como "objetos que contienen datos", más que como eventos o funciones. +Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. -Con The Graph, simplemente defines los tipos de entidad en `schema.graphql`, y Graph Node generará campos de nivel superior para consultar instancias individuales y colecciones de ese tipo de entidad. Cada tipo que deba ser una entidad debe ser anotado con una directiva `@entity`. +With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. -### Buen Ejemplo +### Good Example -La entidad `Gravatar` que aparece a continuación está estructurada en torno a un objeto Gravatar y es un buen ejemplo de cómo podría definirse una entidad. +The `Gravatar` entity below is structured around a Gravatar object and is a good example of how an entity could be defined. ```graphql type Gravatar @entity { @@ -182,9 +182,9 @@ type Gravatar @entity { } ``` -### Mal Ejemplo +### Bad Example -El ejemplo las entidades `GravatarAccepted` y `GravatarDeclined` que aparecen a continuación se basan en eventos. No se recomienda asignar eventos o llamadas a funciones a entidades 1:1. +The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. 
It is not recommended to map events or function calls to entities 1:1. ```graphql type GravatarAccepted @entity { @@ -202,35 +202,35 @@ type GravatarDeclined @entity { } ``` -### Campos Opcionales y Obligatorios +### Optional and Required Fields -Los campos de la entidad pueden definirse como obligatorios u opcionales. Los campos obligatorios se indican con el `!` en el esquema. Si un campo obligatorio no está establecido en la asignación, recibirá este error al consultar el campo: +Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: ``` Null value resolved for non-null field 'name' ``` -Cada entidad debe tener un campo `id`, que es de tipo `ID!` (string). El campo `id` sirve de clave primaria y debe ser único entre todas las entidades del mismo tipo. +Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. -### Tipos de Scalars incorporados +### Built-In Scalar Types -#### GraphQL admite Scalars +#### GraphQL Supported Scalars -Admitimos los siguientes scalars en nuestra API GraphQL: +We support the following scalars in our GraphQL API: -| Tipo | Descripción | -| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y addresses de Ethereum. | -| `ID` | Almacenado como un `string`. | -| `String` | Scalar para valores `string`. Los caracteres null no se admiten y se eliminan automáticamente. | -| `Boolean` | Scalar para valores `boolean`. | -| `Int` | The GraphQL spec define `Int` para tener un tamano de 32 bytes. | -| `BigInt` | Números enteros grandes. Usados para los tipos `uint32`, `int64`, `uint64`, ..., `uint256` de Ethereum. Nota: Todo debajo de `uint32`, como `int32`, `uint24` o `int8` es representado como `i32`. | -| `BigDecimal` | `BigDecimal` Decimales de alta precisión representados como un signo y un exponente. El rango de exponentes va de -6143 a +6144. Redondeado a 34 dígitos significativos. | +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `ID` | Stored as a `string`. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | #### Enums -También puedes crear enums dentro de un esquema. Los Enums tienen la siguiente sintaxis: +You can also create enums within a schema. 
Enums have the following syntax: ```graphql enum TokenStatus { @@ -240,19 +240,19 @@ enum TokenStatus { } ``` -Una vez definido el enum en el esquema, puedes utilizar la representación del string del valor del enum para establecer un campo enum en una entidad. Por ejemplo, puedes establecer el `tokenStatus` a `SecondOwner` definiendo primero tu entidad y posteriormente estableciendo el campo con `entity.tokenStatus = "SecondOwner`. El ejemplo siguiente muestra el aspecto de la entidad Token con un campo enum: +Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: -Puedes encontrar más detalles sobre la escritura de enums en la [GraphQL documentation](https://graphql.org/learn/schema/). +More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). -#### Relaciones entre Entidades +#### Entity Relationships -Una entidad puede tener una relación con una o más entidades de tu esquema. Estas relaciones pueden ser recorridas en tus consultas. Las relaciones en The Graph son unidireccionales. Es posible simular relaciones bidireccionales definiendo una relación unidireccional en cada "extremo" de la relación. +An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. -Las relaciones se definen en las entidades como cualquier otro campo, salvo que el tipo especificado es el de otra entidad. +Relationships are defined on entities just like any other field except that the type specified is that of another entity. -#### Relaciones Uno a Uno +#### One-To-One Relationships -Define un tipo de entidad `Transaction` con una relación opcional de uno a uno con un tipo de entidad `TransactionReceipt`: +Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: ```graphql type Transaction @entity { @@ -266,9 +266,9 @@ type TransactionReceipt @entity { } ``` -#### Relaciones Uno-a-Muchos +#### One-To-Many Relationships -Define un tipo de entidad `TokenBalance` con una relación requerida de uno a varios con un tipo de entidad Token: +Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: ```graphql type Token @entity { @@ -282,15 +282,15 @@ type TokenBalance @entity { } ``` -#### Búsquedas Inversas +#### Reverse Lookups -Se pueden definir búsquedas inversas en una entidad a través del campo `@derivedFrom`. Esto crea un campo virtual en la entidad que puede ser consultado pero que no puede ser establecido manualmente a través de la API de mapeo. Más bien, se deriva de la relación definida en la otra entidad. Para este tipo de relaciones, rara vez tiene sentido almacenar ambos lados de la relación, y tanto la indexación como el rendimiento de la consulta serán mejores cuando sólo se almacene un lado y el otro se derive. +Reverse lookups can be defined on an entity through the `@derivedFrom` field. 
This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -En el caso de las relaciones uno a muchos, la relación debe almacenarse siempre en el lado "uno", y el lado "muchos" debe derivarse siempre. Almacenar la relación de esta manera, en lugar de almacenar una array de entidades en el lado "muchos", resultará en un rendimiento dramáticamente mejor tanto para la indexación como para la consulta del subgrafo. En general, debe evitarse, en la medida de lo posible, el almacenamiento de arrays de entidades. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. -#### Ejemplo +#### Example -Podemos hacer que los balances de un token sean accesibles desde el token derivando un campo `tokenBalances`: +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: ```graphql type Token @entity { @@ -305,13 +305,13 @@ type TokenBalance @entity { } ``` -#### Relaciones de Muchos a Muchos +#### Many-To-Many Relationships -Para las relaciones de muchos a muchos, como los usuarios pueden pertenecer a cualquier número de organizaciones, la forma más directa, pero generalmente no la más eficaz, de modelar la relación es como un array en cada una de las dos entidades implicadas. Si la relación es simétrica, sólo es necesario almacenar un lado de la relación y el otro puede derivarse. +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. -#### Ejemplo +#### Example -Define una búsqueda inversa desde un tipo de entidad `User` a un tipo de entidad `Organization`. En el ejemplo siguiente, esto se consigue buscando el atributo `members` desde la entidad `Organization`. En las consultas, el campo `organizations` en `User` se resolverá buscando todas las entidades de `Organization` que incluyan el ID del usuario. +Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. 
```graphql type Organization @entity { @@ -327,7 +327,7 @@ type User @entity { } ``` -Una forma más eficaz de almacenar esta relación es a través de una tabla de asignación que tiene una entrada para cada par `User` / `Organization` con un esquema como +A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like ```graphql type Organization @entity { @@ -349,7 +349,7 @@ type UserOrganization @entity { } ``` -Este enfoque requiere que las consultas desciendan a un nivel adicional para recuperar, por ejemplo, las organizaciones para los usuarios: +This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: ```graphql query usersWithOrganizations { @@ -364,11 +364,11 @@ query usersWithOrganizations { } ``` -Esta forma más elaborada de almacenar las relaciones de muchos a muchos se traducirá en menos datos almacenados para el subgrafo y, por tanto, en un subgrafo que suele ser mucho más rápido de indexar y consultar. +This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. -#### Agregar comentarios al esquema +#### Adding comments to the schema -Según la especificación GraphQL, se pueden añadir comentarios por encima de los atributos de entidad del esquema utilizando comillas dobles `""`. Esto se ilustra en el siguiente ejemplo: +As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -378,13 +378,13 @@ type MyFirstEntity @entity { } ``` -## Definición de Campos de Búsqueda de Texto Completo +## Defining Fulltext Search Fields -Las consultas de búsqueda de texto completo filtran y clasifican las entidades basándose en una entrada de búsqueda de texto. Las consultas de texto completo pueden devolver coincidencias de palabras similares procesando el texto de la consulta en stems antes de compararlo con los datos del texto indexado. +Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. -La definición de una consulta de texto completo incluye el nombre de la consulta, el diccionario lingüístico utilizado para procesar los campos de texto, el algoritmo de clasificación utilizado para ordenar los resultados y los campos incluidos en la búsqueda. Cada consulta de texto completo puede abarcar varios campos, pero todos los campos incluidos deben ser de un solo tipo de entidad. +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. -Para agregar una consulta de texto completo, incluye un tipo `_Schema_` con una directiva de texto completo en el esquema GraphQL. +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. 
```graphql type _Schema_ @@ -407,7 +407,7 @@ type Band @entity { } ``` -El ejemplo campo `bandSearch` se puede utilizar en las consultas para filtrar las entidades `Band` con base en los documentos de texto en los campos `name`, `description`, y `bio`. Ve a [GraphQL API - Queries](/developer/graphql-api#queries) para ver una descripción de la API de búsqueda de texto completo y más ejemplos de uso. +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. ```graphql query { @@ -420,49 +420,49 @@ query { } ``` -> **[Feature Management](#experimental-features):** Desde `specVersion` `0.0.4` y en adelante, `fullTextSearch` se debe declarar bajo la sección `features` en el manifiesto del subgrafo. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. -### Idiomas admitidos +### Languages supported -La elección de un idioma diferente tendrá un efecto definitivo, aunque a veces sutil, en la API de búsqueda de texto completo. Los campos cubiertos por un campo de consulta de texto completo se examinan en el contexto de la lengua elegida, por lo que los lexemas producidos por las consultas de análisis y búsqueda varían de un idioma a otro. Por ejemplo: al utilizar el diccionario turco compatible, "token" se convierte en "toke", mientras que el diccionario inglés lo convierte en "token". +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". -Diccionarios de idiomas admitidos: +Supported language dictionaries: -| Código | Diccionario | -| ------ | ----------- | -| simple | General | -| da | Danés | -| nl | Holandés | -| en | Inglés | -| fi | Finlandés | -| fr | Francés | -| de | Alemán | -| hu | Húngaro | -| it | Italiano | -| no | Noruego | -| pt | Portugués | -| ro | Rumano | -| ru | Ruso | -| es | Español | -| sv | Sueco | -| tr | Turco | +| Code | Dictionary | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portugese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | -### Algoritmos de Clasificación +### Ranking Algorithms -Algoritmos admitidos para ordenar los resultados: +Supported algorithms for ordering results: -| Algoritmos | Descripción | -| ------------------- | -------------------------------------------------------------------------------------------------- | -| rango | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | -| rango de Proximidad | Similar al rango, pero también incluye la proximidad de los matches. 
| +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | Use the match quality (0-1) of the fulltext query to order the results. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | -## Escribir Mapeos +## Writing Mappings -Los mapeos transforman los datos de Ethereum de los que se abastecen tus mapeos en entidades definidas en tu esquema. Los mapeos se escriben en un subconjunto de [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) llamado [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) que puede ser compilado a WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript es más estricto que el TypeScript normal, pero proporciona una sintaxis familiar. +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. -Para cada handler de eventos que se define en `subgraph.yaml` bajo `mapping.eventHandlers`, crea una función exportada del mismo nombre. Cada handler debe aceptar un único parámetro llamado `event` con un tipo correspondiente al nombre del evento que se está manejando. +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -En el subgrafo de ejemplo, `src/mapping.ts` contiene handlers para los eventos `NewGravatar` y `UpdatedGravatar`: +In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -489,31 +489,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -El primer handler toma un evento `NewGravatar` y crea una nueva entidad `Gravatar` con `new Gravatar(event.params.id.toHex())`, poblando los campos de la entidad usando los parámetros correspondientes del evento. Esta instancia de entidad está representada por la variable `gravatar`, con un valor de id de `event.params.id.toHex()`. +The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. -El segundo handler intenta cargar el `Gravatar` existente desde el almacén de The Graph Node. Si aún no existe, se crea bajo demanda. A continuación, la entidad se actualiza para que coincida con los nuevos parámetros del evento, antes de volver a guardarla en el almacén mediante `gravatar.save()`. +The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. 
-### ID Recomendados para la Creación de Nuevas Entidades
+### Recommended IDs for Creating New Entities

-Cada entidad tiene que tener un `id` que sea único entre todas las entidades del mismo tipo. El valor del `id` de una entidad se establece cuando se crea la entidad. A continuación se recomiendan algunos valores de `id` a tener en cuenta a la hora de crear nuevas entidades. NOTA: El valor del `id` debe ser un `string`.
+Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`.

- `event.params.id.toHex()`
- `event.transaction.from.toHex()`
- `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`

-Proporcionamos la [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) que contiene utilidades para interactuar con el almacén Graph Node y comodidades para manejar datos y entidades de contratos inteligentes. Puedes utilizar esta biblioteca en tus mapeos importando `@graphprotocol/graph-ts` in `mapping.ts`.
+We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`.

-## Generación de Código
+## Code Generation

-Para que trabajar con contratos inteligentes, eventos y entidades sea fácil y seguro desde el punto de vista de los tipos, Graph CLI puede generar tipos AssemblyScript a partir del esquema GraphQL del subgrafo y de las ABIs de los contratos incluidas en las fuentes de datos.
+In order to make working with smart contracts, events and entities easy and type-safe, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.

-Esto se hace con
+This is done with

```sh
graph codegen [--output-dir ] []
```

-pero en la mayoría de los casos, los subgrafos ya están preconfigurados a través de `package.json` para permitirte simplemente ejecutar uno de los siguientes para lograr lo mismo:
+but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:

```sh
# Yarn
yarn codegen
@@ -523,7 +523,7 @@ yarn codegen

# NPM
npm run codegen
```

-Esto generará una clase AssemblyScript para cada contrato inteligente en los archivos ABI mencionados en `subgraph.yaml`, permitiéndote vincular estos contratos a direcciones específicas en los mapeos y llamar a métodos de contrato de sólo lectura contra el bloque que se está procesando. También generará una clase para cada evento del contrato para facilitar el acceso a los parámetros del evento, así como el bloque y la transacción que originó el evento. Todos estos tipos se escriben en `//.ts`. En el subgrafo de ejemplo, esto sería `generated/Gravity/Gravity.ts`, permitiendo a los mapeos importar estos tipos con
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. 
All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with ```javascript import { @@ -535,23 +535,23 @@ import { } from '../generated/Gravity/Gravity' ``` -Además, se genera una clase para cada tipo de entidad en el esquema GraphQL del subgrafo. Estas clases proporcionan una carga de entidades segura, acceso de lectura y escritura a los campos de la entidad, así como un método `save()` para escribir entidades en el almacén. Todas las clases de entidades se escriben en `/schema.ts`, lo que permite que los mapeos los importen con +In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Nota:** La generación de código debe realizarse de nuevo después de cada cambio en el esquema GraphQL o en las ABIs incluidas en el manifiesto. También debe realizarse al menos una vez antes de construir o desplegar el subgrafo. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. -La generación de código no comprueba tu código de mapeo en `src/mapping.ts`. Si quieres comprobarlo antes de intentar desplegar tu subgrafo en the Graph Explorer, puedes ejecutar `yarn build` y corregir cualquier error de sintaxis que el compilador de TypeScript pueda encontrar. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. -## Plantillas de Fuentes de Datos +## Data Source Templates -Un patrón común en los contratos inteligentes de Ethereum es el uso de contratos de registro o fábrica, donde un contrato crea, gestiona o hace referencia a un número arbitrario de otros contratos que tienen cada uno su propio estado y eventos. Las direcciones de estos subcontratos pueden o no conocerse de antemano y muchos de estos contratos pueden crearse y/o añadirse con el tiempo. Por eso, en estos casos, es imposible definir una única fuente de datos o un número fijo de fuentes de datos y se necesita un enfoque más dinámico: _data source templates_. +A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. -### Fuente de Datos para el Contrato Principal +### Data Source for the Main Contract -En primer lugar, define una fuente de datos regular para el contrato principal. El siguiente fragmento muestra un ejemplo simplificado de fuente de datos para el contrato de fábrica de exchange [Uniswap](https://uniswap.io). Nota el handler `NewExchange(address,address)` del evento. 
Se emite cuando el contrato de fábrica crea un nuevo contrato de exchange en la cadena. +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. ```yaml dataSources: @@ -576,9 +576,9 @@ dataSources: handler: handleNewExchange ``` -### Plantillas de Fuentes de Datos para Contratos Creados Dinámicamente +### Data Source Templates for Dynamically Created Contracts -A continuación, añade _plantillas de origen de datos_ al manifiesto. Son idénticas a las fuentes de datos normales, salvo que carecen de una dirección de contrato predefinida en `source`. Normalmente, defines un modelo para cada tipo de subcontrato gestionado o referenciado por el contrato principal. +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. ```yaml dataSources: @@ -612,9 +612,9 @@ templates: handler: handleRemoveLiquidity ``` -### Instanciación de una Plantilla de Fuente de Datos +### Instantiating a Data Source Template -En el último paso, actualiza la asignación del contrato principal para crear una instancia de fuente de datos dinámica a partir de una de las plantillas. En este ejemplo, cambiarías el mapeo del contrato principal para importar la plantilla `Exchange` y llamaría al método `Exchange.create(address)` en él para empezar a indexar el nuevo contrato de exchange. +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. ```typescript import { Exchange } from '../generated/templates' @@ -626,13 +626,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Nota:** Un nuevo origen de datos sólo procesará las llamadas y los eventos del bloque en el que fue creado y todos los bloques siguientes, pero no procesará los datos históricos, es decir, los datos que están contenidos en bloques anteriores. +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. > -> Si los bloques anteriores contienen datos relevantes para la nueva fuente de datos, lo mejor es indexar esos datos leyendo el estado actual del contrato y creando entidades que representen ese estado en el momento de crear la nueva fuente de datos. +> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. -### Contexto de la Fuente de Datos +### Data Source Context -Los contextos de fuentes de datos permiten pasar una configuración extra al instanciar una plantilla. En nuestro ejemplo, digamos que los exchanges se asocian a un par de trading concreto, que se incluye en el evento `NewExchange`. 
Esa información se puede pasar a la fuente de datos instanciada, así: +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' @@ -644,7 +644,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Dentro de un mapeo de la plantilla `Exchange`, se puede acceder al contexto: +Inside a mapping of the `Exchange` template, the context can then be accessed: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -653,11 +653,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -Hay setters y getters como `setString` and `getString` para todos los tipos de valores. +There are setters and getters like `setString` and `getString` for all value types. -## Bloques de Inicio +## Start Blocks -El `startBlock` es un ajuste opcional que permite definir a partir de qué bloque de la cadena comenzará a indexar la fuente de datos. Establecer el bloque inicial permite a la fuente de datos omitir potencialmente millones de bloques que son irrelevantes. Normalmente, un desarrollador de subgrafos establecerá `startBlock` al bloque en el que se creó el contrato inteligente de la fuente de datos. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -683,23 +683,23 @@ dataSources: handler: handleNewEvent ``` -> **Nota:** El bloque de creación del contrato se puede buscar rápidamente en Etherscan: +> **Note:** The contract creation block can be quickly looked up on Etherscan: > -> 1. Busca el contrato introduciendo su dirección en la barra de búsqueda. -> 2. Haz clic en el hash de la transacción de creación en la sección `Contract Creator`. -> 3. Carga la página de detalles de la transacción, donde encontrarás el bloque inicial de ese contrato. +> 1. Search for the contract by entering its address in the search bar. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Load the transaction details page where you'll find the start block for that contract. -## Handlers de Llamadas +## Call Handlers -Aunque los eventos proporcionan una forma eficaz de recoger los cambios relevantes en el estado de un contrato, muchos contratos evitan generar registros para optimizar los costos de gas. En estos casos, un subgrafo puede suscribirse a las llamadas realizadas al contrato de la fuente de datos. Esto se consigue definiendo los handlers de llamadas que hacen referencia a la firma de la función y al handler de mapeo que procesará las llamadas a esta función. Para procesar estas llamadas, el manejador de mapeo recibirá un `ethereum.Call` como argumento con las entradas y salidas tipificadas de la llamada. Las llamadas realizadas en cualquier profundidad de la cadena de llamadas de una transacción activarán el mapeo, permitiendo capturar la actividad con el contrato de origen de datos a través de los contratos proxy. 
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. -Los handlers de llamadas sólo se activarán en uno de estos dos casos: cuando la función especificada sea llamada por una cuenta distinta del propio contrato o cuando esté marcada como externa en Solidity y sea llamada como parte de otra función en el mismo contrato. +Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Nota:**Los handlers de llamadas no son compatibles con Rinkeby, Goerli o Ganache. Los handlers de llamadas dependen actualmente de la API de rastreo de Parity y estas redes no la admiten. +> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. -### Definición de un Handler de Llamadas +### Defining a Call Handler -Para definir un handler de llamadas en su manifiesto simplemente añade una array `callHandlers` bajo la fuente de datos a la que deseas suscribirte. +To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to. ```yaml dataSources: @@ -724,11 +724,11 @@ dataSources: handler: handleCreateGravatar ``` -La `función` es la firma de la función normalizada por la que se filtran las llamadas. La propiedad `handler` es el nombre de la función de tu mapeo que quieres ejecutar cuando se llame a la función de destino en el contrato de origen de datos. +The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. -### Función Mapeo +### Mapping Function -Cada handler de llamadas toma un solo parámetro que tiene un tipo correspondiente al nombre de la función llamada. En el subgrafo de ejemplo anterior, el mapeo contiene un handler para cuando la función `createGravatar` es llamada y recibe un parámetro `CreateGravatarCall` como argumento: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -743,22 +743,22 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -La función `handleCreateGravatar` toma una nueva `CreateGravatarCall` que es una subclase de `ethereum.Call`, proporcionada por `@graphprotocol/graph-ts`, que incluye las entradas y salidas tipificadas de la llamada. 
El tipo `CreateGravatarCall` se genera por ti cuando ejecutas `graph codegen`. +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. -## Handlers de Bloques +## Block Handlers -Además de suscribirse a eventos del contracto o llamadas a funciones, un subgrafo puede querer actualizar sus datos a medida que se añaden nuevos bloques a la cadena. Para ello, un subgrafo puede ejecutar una función después de cada bloque o después de los bloques que coincidan con un filtro predefinido. +In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. -### Filtros Admitidos +### Supported Filters ```yaml filter: kind: call ``` -_El handler definido será llamado una vez por cada bloque que contenga una llamada al contrato (fuente de datos) bajo el cual está definido el handler._ +_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -La ausencia de un filtro para un handler de bloque asegurará que el handler sea llamado en cada bloque. Una fuente de datos sólo puede contener un handler de bloque para cada tipo de filtro. +The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. ```yaml dataSources: @@ -785,9 +785,9 @@ dataSources: kind: call ``` -### Función de Mapeo +### Mapping Function -La función de mapeo recibirá un `ethereum.Block` como único argumento. Al igual que las funciones de mapeo de eventos, esta función puede acceder a las entidades del subgrafo existentes en el almacén, llamar a los contratos inteligentes y crear o actualizar entidades. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -799,9 +799,9 @@ export function handleBlock(block: ethereum.Block): void { } ``` -## Eventos Anónimos +## Anonymous Events -Si necesitas procesar eventos anónimos en Solidity, puedes hacerlo proporcionando el tema 0 del evento, como en el ejemplo: +If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: ```yaml eventHandlers: @@ -810,20 +810,20 @@ eventHandlers: handler: handleGive ``` -Un evento sólo se activará cuando la firma y el tema 0 coincidan. Por defecto, `topic0` es igual al hash de la firma del evento. +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. 
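
As a hedged illustration (not part of the original page), the handler referenced by such a manifest entry looks like any other event handler once `topic0` has matched. The generated module path, event class, and `Note` entity below are assumptions about the ABI and schema in use:

```typescript
// Hypothetical example: `graph codegen` produces the LogNote event class from
// whatever ABI the data source declares, so this import path is an assumption.
import { LogNote } from '../generated/DSNote/DSNote'
import { Note } from '../generated/schema'

export function handleGive(event: LogNote): void {
  // Anonymous events carry no signature topic, but once topic0 matches,
  // the decoded parameters and transaction metadata are available as usual.
  let note = new Note(event.transaction.hash.toHex() + '-' + event.logIndex.toString())
  note.save()
}
```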
-## Características experimentales +## Experimental features -Las características del subgrafo que parten de `specVersion` `0.0.4` deben declararse explícitamente en la sección `features` del nivel superior del archivo del manifiesto, utilizando su nombre `camelCase`, como se indica en la tabla siguiente: +Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: -| Característica | Nombre | +| Feature | Name | | --------------------------------------------------------- | ------------------------- | | [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | | [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | -Por ejemplo, si un subgrafo utiliza las características **Full-Text Search** y **Non-fatal Errors**, el campo `features` del manifiesto debería ser: +For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml specVersion: 0.0.4 @@ -834,27 +834,27 @@ features: dataSources: ... ``` -Ten en cuenta que el uso de una característica sin declararla incurrirá en un **error de validación** durante el despliegue del subgrafo, pero no se producirá ningún error si se declara una característica pero no se utiliza. +Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. -### IPFS en Contratos de Ethereum +### IPFS on Ethereum Contracts -Un caso de uso común para combinar IPFS con Ethereum es almacenar datos en IPFS que serían demasiado costosos de mantener en la cadena, y hacer referencia al hash de IPFS en los contratos de Ethereum. +A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. -Dados estos hashes de IPFS, los subgrafos pueden leer los archivos correspondientes desde IPFS utilizando `ipfs.cat` y `ipfs.map`. Sin embargo, para hacer esto de forma fiable, es necesario que estos archivos estén anclados en el nodo IPFS al que se conecta the Graph Node que indexa el subgrafo. En el caso del [hosted service](https://thegraph.com/hosted-service), es [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). -> **Nota:** The Graph Network todavía no admite `ipfs.cat` y `ipfs.map`, y los desarrolladores no deben desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio. +> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
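
As a rough sketch (not from the original text), reading such a pinned JSON file inside a mapping on the hosted service could look like the following. The helper name is hypothetical, and it assumes the IPFS hash arrives as a plain string and points to a JSON document available on the connected IPFS node:

```typescript
import { ipfs, json, Bytes, JSONValue, TypedMap } from '@graphprotocol/graph-ts'

// Hypothetical helper: fetch and parse a JSON document referenced by an IPFS
// hash that a contract stored on chain; returns null if the file cannot be read.
function readIpfsMetadata(hash: string): TypedMap<string, JSONValue> | null {
  let data = ipfs.cat(hash) // resolves to null when the file is not available
  if (data === null) {
    return null
  }
  return json.fromBytes(data as Bytes).toObject()
}
```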
-Para facilitar esto a los desarrolladores de subgrafos, el equipo de The Graph escribió una herramienta para transferir archivos de un nodo IPFS a otro, llamada [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). +In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). -> **[La Gestión de Funciones](#experimental-features):** `ipfsOnEthereumContracts` debe declararse en `funciones` en el manifiesto del subgrafo. +> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. -### Errores no fatales +### Non-fatal errors -Los errores de indexación en subgrafos ya sincronizados harán que, por defecto, el subgrafo falle y deje de sincronizarse. Los subgrafos pueden ser configurados alternativamente para continuar la sincronización en presencia de errores, ignorando los cambios realizados por el handler que provocó el error. Esto da a los autores de subgrafos tiempo para corregir sus subgrafos mientras las consultas siguen siendo servidas contra el último bloque, aunque los resultados serán posiblemente inconsistentes debido al fallo que causó el error. Ten en cuenta que algunos errores siguen siendo siempre fatales, para que el error no sea fatal debe saberse que es determinista. +Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. -> **Nota:** The Graph Network todavía no admite errores no fatales, y los desarrolladores no deben desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. -La activación de los errores no fatales requiere el establecimiento de la siguiente bandera de características en el manifiesto del subgrafo: +Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: ```yaml specVersion: 0.0.4 @@ -864,7 +864,7 @@ features: ... ``` -La consulta también debe optar por consultar datos con posibles inconsistencias a través del argumento `subgraphError`. También se recomienda consultar `_meta` para comprobar si el subgrafo ha saltado los errores, como en el ejemplo: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:

```graphql
foos(first: 100, subgraphError: allow) {
  id
}

_meta {
  hasIndexingErrors
}
```

-Si el subgrafo encuentra un error esa consulta devolverá tanto los datos como un error de graphql con el mensaje `"indexing_error"`, como en este ejemplo de respuesta:
+If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:

```graphql
"data": {
    "foos": [
        ...
    ],
    "_meta": {
        "hasIndexingErrors": true
    }
},
"errors": [
    {
        "message": "indexing_error"
    }
]
```

-### Grafting en Subgrafos Existentes
+### Grafting onto Existing Subgraphs

-Cuando un subgrafo se despliega por primera vez, comienza a indexar eventos en el bloque génesis de la cadena correspondiente (o en el `startBlock` definido con cada fuente de datos) En algunas circunstancias, es beneficioso reutilizar los datos de un subgrafo existente y comenzar a indexar en un bloque mucho más tarde. Este modo de indexación se denomina _Grafting_. El grafting es, por ejemplo, útil durante el desarrollo para superar rápidamente errores simples en los mapeos, o para hacer funcionar temporalmente un subgrafo existente después de que haya fallado.
+When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed.

-> **Nota:** El grafting requiere que el indexador haya indexado el subgrafo base. No se recomienda en The Graph Network en este momento, y los desarrolladores no deberían desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio.
+> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio.

-Un subgrafo se injerta en un subgrafo base cuando el manifiesto del subgrafo en `subgraph.yaml` contiene un bloque `graft` en el nivel superior:
+A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top level:

```yaml
description: ...
graft:
  base: Qm... # Subgraph ID of base subgraph
  block: 7345624 # Block number
```

-Cuando se despliega un subgrafo cuyo manifiesto contiene un bloque `graft`, Graph Node copiará los datos del subgrafo `base` hasta e incluyendo el `block` dado y luego continuará indexando el nuevo subgrafo a partir de ese bloque. El subgrafo base debe existir en el target de Graph Node de destino y debe haber indexado hasta al menos el bloque dado. Debido a esta restricción, el grafting sólo debería utilizarse durante el desarrollo o durante una emergencia para acelerar la producción de un subgrafo equivalente no grafted.
+When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. 
Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. -Dado que el grafting copia en lugar de indexar los datos de base, es mucho más rápido llevar el subgrafo al bloque deseado que indexar desde cero, aunque la copia inicial de los datos puede tardar varias horas en el caso de subgrafos muy grandes. Mientras se inicializa el subgrafo grafteado, the Graph Node registrará información sobre los tipos de entidad que ya han sido copiados. +Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede desviarse del esquema del subgrafo base de las siguientes maneras: +The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: -- Agrega o elimina tipos de entidades -- Elimina los atributos de los tipos de entidad -- Agrega atributos anulables a los tipos de entidad -- Convierte los atributos no anulables en atributos anulables -- Añade valores a los enums -- Agrega o elimina interfaces -- Cambia para qué tipos de entidades se implementa una interfaz +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented -> **[La gestión de características](#experimental-features):** `grafting` se declara en `features` en el manifiesto del subgrafo. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. From 197165d7c40a2ff2d48a918828e800fb59514cf8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:43 -0500 Subject: [PATCH 024/241] New translations assemblyscript-api.mdx (Japanese) --- pages/ja/developer/assemblyscript-api.mdx | 300 +++++++++++----------- 1 file changed, 150 insertions(+), 150 deletions(-) diff --git a/pages/ja/developer/assemblyscript-api.mdx b/pages/ja/developer/assemblyscript-api.mdx index 0069310090ce..2afa431fe8c5 100644 --- a/pages/ja/developer/assemblyscript-api.mdx +++ b/pages/ja/developer/assemblyscript-api.mdx @@ -2,75 +2,75 @@ title: AssemblyScript API --- -> Note: `graph-cli`/`graph-ts` version `0.22.0`より前にサブグラフを作成した場合、古いバージョンの AssemblyScript を使用しているので、[`Migration Guide`](/developer/assemblyscript-migration-guide)を参照することをお勧めします。 +> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -このページでは、サブグラフのマッピングを記述する際に、どのような組み込み API を使用できるかを説明します。 すぐに使える API は 2 種類あります: +This page documents what built-in APIs can be used when writing subgraph mappings. 
Two kinds of APIs are available out of the box: -- [Graph TypeScript ライブラリ](https://github.com/graphprotocol/graph-ts) (`graph-ts`)と -- `graph codegen`によってサブグラフファイルから生成されたコードです。 +- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and +- code generated from subgraph files by `graph codegen`. -また、[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)との互換性があれば、他のライブラリを依存関係に追加することも可能です。 マッピングはこの言語で書かれているので、言語や標準ライブラリの機能については、 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki)が参考になります。 +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. -## インストール +## Installation -[`graph init`](/developer/create-subgraph-hosted)で作成されたサブグラフには、あらかじめ設定された依存関係があります。 これらの依存関係をインストールするために必要なのは、以下のコマンドのいずれかを実行することです: +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: ```sh yarn install # Yarn -npm install # NPM +npm install # NPM ``` -サブグラフが最初から作成されている場合は、次の 2 つのコマンドのいずれかを実行すると、Graph TypeScript ライブラリが依存関係としてインストールされます: +If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: ```sh -yarn add --dev @graphprotocol/graph-ts # Yarn -npm install -save-dev @graphprotocol/graph-ts # NPM +yarn add --dev @graphprotocol/graph-ts # Yarn +npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## API リファレンス +## API Reference -`@graphprotocol/graph-ts`ライブラリは、以下の API を提供しています: +The `@graphprotocol/graph-ts` library provides the following APIs: -- Ethereum スマートコントラクト、イベント、ブロック、トランザクション、Ethereum の値を扱うための`ethereum`API -- エンティティをグラフノードのストアからロードしたり、ストアに保存したりする`store`API -- Graph Node の出力や Graph Explorer にメッセージを記録するための`log`API です -- IPFS からファイルをロードする`ipfs`API -- JSON データを解析するための`json`API -- 暗号機能を使用するための`crypto`API -- Ethereum、JSON、GraphQL、AssemblyScript など、異なるタイプのシステム間で変換するための低レベルプリミティブ +- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. +- A `store` API to load and save entities from and to the Graph Node store. +- A `log` API to log messages to the Graph Node output and the Graph Explorer. +- An `ipfs` API to load files from IPFS. +- A `json` API to parse JSON data. +- A `crypto` API to use cryptographic functions. +- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. -### バージョン +### Versions -サブグラフマニフェストの`apiVersion` は、指定されたサブグラフに対してグラフノードが実行するマッピング API のバージョンを指定します。 現在のマッピング API のバージョンは 0.0.6 です。 +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| バージョン | リリースノート | -| :-: | --- | -| 0.0.6 | Ethereum Transaction オブジェクトに`nonce`フィールドを追加 イーサリアムブロックオブジェクトに
Added `baseFeePerGas`を追加 | -| 0.0.5 | AssemblyScript がバージョン 0.19.10 にアップグレード(変更点がありますので[`Migration Guide`](/developer/assemblyscript-migration-guide))をご覧ください)。
`ethereum.transaction.gasUsed`の名前が`ethereum.transaction.gasLimit`に変更 | -| 0.0.4 | Ethereum SmartContractCall オブジェクトに`functionSignature`フィールドを追加 | -| 0.0.3 | Ethereum Call オブジェクトに`from`フィールドを追加
`etherem.call.address`の名前を `ethereum.call.to`に変更 | -| 0.0.2 | Ethereum Transaction オブジェクトに `input`フィールドを追加 | +| Version | Release notes | +|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | -### 組み込み型 +### Built-in Types -AssemblyScript に組み込まれている基本型のドキュメントは[AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types)にあります。 +Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -以下の追加型は`@graphprotocol/graph-ts`で提供されています。 +The following additional types are provided by `@graphprotocol/graph-ts`. -#### バイト配列 +#### ByteArray ```typescript -'@graphprotocol/graph-ts'から{ ByteArray } をインポートします。 +import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray`は、`u8`の配列を表します。 +`ByteArray` represents an array of `u8`. -_構造_ +_Construction_ - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. - `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. -_型変換_ +_Type conversions_ - `toHexString(): string` - Converts to a hex string prefixed with `0x`. - `toString(): string` - Interprets the bytes as a UTF-8 string. @@ -78,66 +78,66 @@ _型変換_ - `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. - `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. -_オペレーター_ +_Operators_ - `equals(y: ByteArray): bool` – can be written as `x == y`. #### BigDecimal ```typescript -'@graphprotocol/graph-ts'から { BigDecimal } をインポートします。 +import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal`は、任意の精度の小数を表現するために使用されます。 +`BigDecimal` is used to represent arbitrary precision decimals. -_構造_ +_Construction_ - `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. - `static fromString(s: string): BigDecimal` – parses from a decimal string. -_型変換_ +_Type conversions_ - `toString(): string` – prints to a decimal string. -_数学_ - -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y` -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y` -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y` -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y` -- `equals(y: BigDecimal): bool` – can be written as `x == y` -- `notEqual(y: BigDecimal): bool` – can be written as `x != y` -- `lt(y: BigDecimal): bool` – can be written as `x < y` -- `le(y: BigDecimal): bool` – can be written as `x <= y` -- `gt(y: BigDecimal): bool` – can be written as `x > y` -- `ge(y: BigDecimal): bool` – can be written as `x >= y` +_Math_ + +- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. +- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. +- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. +- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. +- `equals(y: BigDecimal): bool` – can be written as `x == y`. +- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. +- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `le(y: BigDecimal): bool` – can be written as `x <= y`. +- `gt(y: BigDecimal): bool` – can be written as `x > y`. +- `ge(y: BigDecimal): bool` – can be written as `x >= y`. - `neg(): BigDecimal` - can be written as `-x`. 
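
As a brief illustration (not part of the original reference), these operations are often combined to turn a raw on-chain integer into a human-readable decimal. The helper name and the idea that the token's `decimals` arrives as an `i32` are assumptions:

```typescript
import { BigDecimal, BigInt } from '@graphprotocol/graph-ts'

// Hypothetical helper: scale a raw token amount by 10^decimals,
// e.g. 1500000 with 6 decimals becomes 1.5.
function toDecimal(raw: BigInt, decimals: i32): BigDecimal {
  let scale = BigDecimal.fromString('1')
  for (let i = 0; i < decimals; i++) {
    scale = scale.times(BigDecimal.fromString('10'))
  }
  return raw.toBigDecimal().div(scale)
}
```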
#### BigInt ```typescript -'@graphprotocol/graph-ts'から { BigInt } をインポートします。 +import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt`は大きな整数を表すのに使われます。 これには、Ethereum の`uint32`~`uint256` 、`int64` ~`int256`の値が含まれます。 `uint32`、`int32`、`uint24`、`int8`以下のものはすべて`i32`で表されます。 +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. -`BigInt`クラスの API は以下の通りです。 +The `BigInt` class has the following API: -_構造_ +_Construction_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32` -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. - _型変換_ + _Type conversions_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. - `x.toString(): string` – turns `BigInt` into a decimal number string. - `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. - `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. -_数学_ +_Math_ - `x.plus(y: BigInt): BigInt` – can be written as `x + y`. - `x.minus(y: BigInt): BigInt` – can be written as `x - y`. @@ -164,12 +164,12 @@ _数学_ #### TypedMap ```typescript -'@graphprotocol/graph-ts'から { TypedMap } をインポートします。 +import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` はキーと値のペアを格納するために使用することができます。 [この例](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51)を参照してください。 +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). -TypedMap クラスは以下のような API を持っています。 +The `TypedMap` class has the following API: - `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` - `map.set(key: K, value: V): void` – sets the value of `key` to `value` @@ -180,12 +180,12 @@ TypedMap クラスは以下のような API を持っています。 #### Bytes ```typescript -'@graphprotocol/graph-ts'から { Bytes } をインポートします。 +import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` は、任意の長さの bytes 配列を表すために使用されます。 これには、Ethereum の `bytes`、`bytes32` などの型の値が含まれます。 +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. 
-`Bytes`クラスは AssemblyScript の[Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64)を継承しており、`Uint8Array` のすべての機能に加えて、以下の新しいメソッドをサポートしています。 +The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: - `b.toHex()` – returns a hexadecimal string representing the bytes in the array - `b.toString()` – converts the bytes in the array to a string of unicode characters @@ -194,28 +194,28 @@ TypedMap クラスは以下のような API を持っています。 #### Address ```typescript -'@graphprotocol/graph-ts'から { Address } をインポートします。 +import { Address } from '@graphprotocol/graph-ts' ``` -`Address`は Ethereum の`address`値を表現するために`Bytes`を拡張しています。 +`Address` extends `Bytes` to represent Ethereum `address` values. -`Bytes`の API の上に以下のメソッドを追加しています。 +It adds the following method on top of the `Bytes` API: - `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string ### Store API ```typescript -'@graphprotocol/graph-ts'から { store } をインポートします。 +import { store } from '@graphprotocol/graph-ts' ``` -`store` API は、グラフノードのストアにエンティティを読み込んだり、保存したり、削除したりすることができます。 +The `store` API allows to load, save and remove entities from and to the Graph Node store. -ストアに書き込まれたエンティティは、サブグラフの GraphQL スキーマで定義された`@entity`タイプに一対一でマッピングされます。 これらのエンティティの扱いを便利にするために、[Graph CLI](https://github.com/graphprotocol/graph-cli)で提供される `graph codegen` コマンドは、組み込みの`Entity`型のサブクラスであるエンティティ・クラスを生成します。 +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. -#### エンティティの作成 +#### Creating entities -Ethereum のイベントからエンティティを作成する際の一般的なパターンを以下に示します。 +The following is a common pattern for creating entities from Ethereum events. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -チェーンの処理中に`Transfer` イベントが発生すると、生成された`Transfer`タイプ(ここではエンティティタイプとの名前の衝突を避けるために`TransferEvent`とエイリアスされています)を使用して、`handleTransfer`イベントハンドラに渡されます。 このタイプでは、イベントの親トランザクションやそのパラメータなどのデータにアクセスすることができます。 +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -各エンティティは、他のエンティティとの衝突を避けるために、ユニークな ID を持たなければなりません。 イベントのパラメータには、使用可能な一意の識別子が含まれているのが一般的です。 注:トランザクションのハッシュを ID として使用することは、同じトランザクション内の他のイベントがこのハッシュを ID としてエンティティを作成しないことを前提としています。 +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. 
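Where a single transaction can emit the same event more than once, a common workaround (shown here only as an illustrative sketch on top of the `Transfer` example above) is to append the event's log index to the transaction hash:

```typescript
// Illustrative only: one Transfer entity per event, even within the same transaction
let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
let transfer = new Transfer(id)
```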
-#### ストアからのエンティティの読み込み +#### Loading entities from the store -エンティティがすでに存在する場合、以下の方法でストアからロードすることができます。 +If an entity already exists, it can be loaded from the store with the following: ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -エンティティはまだストアに存在していない可能性があるため、`load`メソッドは`Transfer | null`型の値を返します。 そのため、値を使用する前に、`null`のケースをチェックする必要があるかもしれません。 +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. -> **Note:** エンティティのロードは、マッピングでの変更がエンティティの以前のデータに依存する場合にのみ必要です。 既存のエンティティを更新する 2 つの方法については、次のセクションを参照してください。 +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. -#### 既存のエンティティの更新 +#### Updating existing entities -既存のエンティティを更新するには 2 つの方法があります。 +There are two ways to update an existing entity: -1. `Transfer.load(id)`などでエンティティをロードし、エンティティにプロパティを設定した後、`.save()`でストアに戻す。 -2. 単純に`new Transfer(id)`でエンティティを作成し、エンティティにプロパティを設定し、ストアに `.save()`します。 エンティティがすでに存在する場合は、変更内容がマージされます。 +1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. +2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. -プロパティの変更は、生成されたプロパティセッターのおかげで、ほとんどの場合、簡単です。 +Changing properties is straight forward in most cases, thanks to the generated property setters: ```typescript let transfer = new Transfer(id) @@ -279,16 +279,16 @@ transfer.to = ... transfer.amount = ... ``` -また、次の 2 つの命令のいずれかで、プロパティの設定を解除することも可能です。 +It is also possible to unset properties with one of the following two instructions: ```typescript transfer.from.unset() transfer.from = null ``` -これは、オプションのプロパティ、つまり GraphQL で`!`を付けずに宣言されているプロパティでのみ機能します。 例としては、`owner: Bytes`や`amount: BigInt`です。 +This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. -エンティティから配列を取得すると、その配列のコピーが作成されるため、配列のプロパティの更新には少し手間がかかります。 つまり、配列を変更した後は、明示的に配列のプロパティを設定し直す必要があります。 次の例では、`entity` が `numbers: [BigInt!]!` を持っていると仮定します。 +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. ```typescript // This won't work @@ -302,9 +302,9 @@ entity.numbers = numbers entity.save() ``` -#### ストアからのエンティティの削除 +#### Removing entities from the store -現在、生成された型を使ってエンティティを削除する方法はありません。 代わりに、エンティティを削除するには、エンティティタイプの名前とエンティティ ID を`store.remove`に渡す必要があります。 +There is currently no way to remove an entity via the generated types. 
Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' @@ -315,15 +315,15 @@ store.remove('Transfer', id) ### Ethereum API -Ethereum API は、スマートコントラクト、パブリックステート変数、コントラクト関数、イベント、トランザクション、ブロック、および Ethereum データのエンコード/デコードへのアクセスを提供します。 +The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. -#### Ethereum タイプのサポート +#### Support for Ethereum Types -エンティティと同様に、`graph codegen`は、サブグラフで使用されるすべてのスマートコントラクトとイベントのためのクラスを生成します。 このためには、コントラクト ABI がサブグラフマニフェストのデータソースの一部である必要があります。 通常、ABI ファイルは`abis/`フォルダに格納されています。 +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -生成されたクラスでは、Ethereum 型と [組み込み型](#built-in-types)の間の変換が背後で行われるため、サブグラフの作成者はそれらを気にする必要がありません。 +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. -以下の例で説明します。 以下のようなサブグラフのスキーマが与えられます。 +The following example illustrates this. Given a subgraph schema like ```graphql type Transfer @entity { @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### イベントとブロック/トランザクションデータ +#### Events and Block/Transaction Data -前述の例の`Transfer`イベントのように、イベントハンドラに渡された Ethereum イベントは、イベントパラメータへのアクセスだけでなく、その親となるトランザクションや、それらが属するブロックへのアクセスも提供します。 `event` インスタンスからは、以下のデータを取得することができます(これらのクラスは、 `graph-ts`の`ethereum`モジュールの一部です)。 +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -390,11 +390,11 @@ class Transaction { } ``` -#### スマートコントラクトの状態へのアクセス +#### Access to Smart Contract State -`graph codegen`が生成するコードには、サブグラフで使用されるスマートコントラクトのクラスも含まれています。 これらを使って、パブリックな状態変数にアクセスしたり、現在のブロックにあるコントラクトの関数を呼び出したりすることができます。 +The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. -よくあるパターンは、イベントが発生したコントラクトにアクセスすることです。 これは以下のコードで実現できます。 +A common pattern is to access the contract from which an event originates. This is achieved with the following code: ```typescript // Import the generated contract class @@ -411,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -Ethereum の `ERC20Contract`に`symbol`というパブリックな読み取り専用の関数があれば、`.symbol()`で呼び出すことができます。 パブリックな状態変数については、同じ名前のメソッドが自動的に作成されます。 +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -サブグラフの一部である他のコントラクトは、生成されたコードからインポートすることができ、有効なアドレスにバインドすることができます。 +Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
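As an illustration, binding such a contract could look like the sketch below; the contract class name, import path and address are placeholders rather than output generated from any real subgraph:

```typescript
import { Address } from '@graphprotocol/graph-ts'
// Hypothetical class generated by `graph codegen` for another data source
import { TokenContract } from '../generated/Token/TokenContract'

// Placeholder address; in practice this would be a real deployment address
let token = TokenContract.bind(Address.fromString('0x0000000000000000000000000000000000000000'))
// Assuming the contract exposes a public read-only `symbol` function
let symbol = token.symbol()
```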
-#### リバートされた呼び出しの処理 +#### Handling Reverted Calls -コントラクトの読み取り専用メソッドが復帰する可能性がある場合は、`try_`を前置して生成されたコントラクトメソッドを呼び出すことで対処しなければなりません。 例えば、Gravity コントラクトでは`gravatarToOwner`メソッドを公開しています。 このコードでは、そのメソッドの復帰を処理することができます。 +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -429,11 +429,11 @@ if (callResult.reverted) { } ``` -ただし、Geth や Infura のクライアントに接続された Graph ノードでは、すべてのリバートを検出できない場合があるので、これに依存する場合は Parity のクライアントに接続された Graph ノードを使用することをお勧めします。 +Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. -#### 符号化/復号化 ABI +#### Encoding/Decoding ABI -`ethereum`モジュールの`encode`/ `decode`関数を使用して、Ethereum の ABI エンコーディングフォーマットに従ってデータをエンコード/デコードすることができます。 +Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -450,7 +450,7 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! let decoded = ethereum.decode('(address,uint256)', encoded) ``` -その他の情報: +For more information: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) @@ -459,12 +459,12 @@ let decoded = ethereum.decode('(address,uint256)', encoded) ### Logging API ```typescript -'@graphprotocol/graph-ts'から{ log } をインポートします。 +import { log } from '@graphprotocol/graph-ts' ``` -`log` API は、サブグラフがグラフノードの標準出力やグラフエクスプローラに情報を記録するためのものです。 メッセージは、異なるログレベルを使って記録することができます。 基本的なフォーマット文字列の構文が提供されており、引数からログメッセージを構成することができます。 +The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. -`log` API には以下の機能があります: +The `log` API includes the following functions: - `log.debug(fmt: string, args: Array): void` - logs a debug message. - `log.info(fmt: string, args: Array): void` - logs an informational message. @@ -472,17 +472,17 @@ let decoded = ethereum.decode('(address,uint256)', encoded) - `log.error(fmt: string, args: Array): void` - logs an error message. - `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -`log` API は、フォーマット文字列と文字列値の配列を受け取ります。 そして、プレースホルダーを配列の文字列値で置き換えます。 最初の`{}`プレースホルダーは配列の最初の値に置き換えられ、2 番目の`{}`プレースホルダーは 2 番目の値に置き換えられ、以下のようになります。 +The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. 
```typescript -log.info('表示されるメッセージ。{}, {}, {}', [value.toString(), anotherValue.toString(), 'すでに文字列']) +log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) ``` -#### 1 つまたは複数の値を記録する +#### Logging one or more values -##### 1 つの値を記録する +##### Logging a single value -以下の例では、文字列値 "A" を配列に渡して`['A']` にしてからログに記録しています。 +In the example below, the string value "A" is passed into an array to become`['A']` before being logged: ```typescript let myValue = 'A' @@ -493,9 +493,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### 既存の配列から 1 つのエントリをロギングする +##### Logging a single entry from an existing array -以下の例では、配列に 3 つの値が含まれているにもかかわらず、引数の配列の最初の値だけがログに記録されます。 +In the example below, only the first value of the argument array is logged, despite the array containing three values. ```typescript let myArray = ['A', 'B', 'C'] @@ -506,9 +506,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -#### 既存の配列から複数のエントリを記録する +#### Logging multiple entries from an existing array -引数配列の各エントリは、ログメッセージ文字列に独自のプレースホルダー`{}`を必要とします。 以下の例では、ログメッセージに 3 つのプレースホルダー`{}`が含まれています。 このため、`myArray`の 3 つの値すべてがログに記録されます。 +Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. ```typescript let myArray = ['A', 'B', 'C'] @@ -519,9 +519,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### 既存の配列から特定のエントリをロギングする +##### Logging a specific entry from an existing array -配列内の特定の値を表示するには、インデックス化された値を指定する必要があります。 +To display a specific value in the array, the indexed value must be provided. ```typescript export function handleSomeEvent(event: SomeEvent): void { @@ -530,12 +530,12 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### イベント情報の記録 +##### Logging event information -以下の例では、イベントからブロック番号、ブロックハッシュ、トランザクションハッシュをログに記録しています。 +The example below logs the block number, block hash and transaction hash from an event: ```typescript -'@graphprotocol/graph-ts'から { log } をインポートします。 +import { log } from '@graphprotocol/graph-ts' export function handleSomeEvent(event: SomeEvent): void { log.debug('Block number: {}, block hash: {}, transaction hash: {}', [ @@ -549,12 +549,12 @@ export function handleSomeEvent(event: SomeEvent): void { ### IPFS API ```typescript -'@graphprotocol/graph-ts'から { ipfs } をインポートします。 +import { ipfs } from '@graphprotocol/graph-ts' ``` -スマートコントラクトは時折、チェーン上の IPFS ファイルをアンカリングします。 これにより、マッピングはコントラクトから IPFS ハッシュを取得し、IPFS から対応するファイルを読み取ることができます。 ファイルのデータは`Bytes`として返されますが、通常は、このページで後述する `json` API などを使ってさらに処理する必要があります。 +Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. 
-IPFS のハッシュやパスが与えられた場合、IPFS からのファイルの読み込みは以下のように行われます。 +Given an IPFS hash or path, reading a file from IPFS is done as follows: ```typescript // Put this inside an event handler in the mapping @@ -567,9 +567,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**注意:** `ipfs.cat` は現時点では決定論的ではありません。 このため、結果に`null`が含まれていないかどうかを常にチェックする必要があります。 リクエストがタイムアウトする前に、Ipfs ネットワーク上でファイルを取得できない場合は、`null`が返されます。 ファイルを確実に取得するためには、グラフノードが接続する IPFS ノードにファイルを固定する必要があります。 [hosted service](https://thegraph.com/hosted-service)では、[https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs)です。 詳細は、[IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) のセクションを参照してください。 +**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. -また、`ipfs.map`.を使って、大きなファイルをストリーミングで処理することも可能です。 この関数は、IPFS ファイルのハッシュまたはパス、コールバックの名前、そして動作を変更するためのフラグを受け取ります。 +It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,34 +599,34 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -現在サポートされている唯一のフラグは`json`で、これを`ipfs.map`に渡さなければなりません。 `json`フラグを使用すると、IPFS ファイルは一連の JSON 値で構成され、1 行に 1 つの値が必要です。 `ipfs.map`への呼び出しは、ファイルの各行を読み込み、`JSONValue`にデシリアライズし、それぞれのコールバックを呼び出します。 コールバックは、エンティティ・オペレーションを使って、`JSONValue`からデータを保存することができます。 エンティティの変更は、`ipfs.map`を呼び出したハンドラが正常に終了したときにのみ保存されます。それまでの間は、メモリ上に保持されるため、`ipfs.map`が処理できるファイルのサイズは制限されます。 +The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -成功すると,`ipfs.map`は `void`を返します。 コールバックの呼び出しでエラーが発生した場合、`ipfs.map`を呼び出したハンドラは中止され、サブグラフは失敗とマークされます。 +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. ### Crypto API ```typescript -'@graphprotocol/graph-ts'から { crypto } をインポートします。 +import { crypto } from '@graphprotocol/graph-ts' ``` -`crypto` API は、マッピングで使用できる暗号化関数を提供します。 今のところ、1 つしかありません。 +The `crypto` API makes a cryptographic functions available for use in mappings. 
Right now, there is only one: - `crypto.keccak256(input: ByteArray): ByteArray` ### JSON API ```typescript -'@graphprotocol/graph-ts'から{ json, JSONValueKind } をインポートします。 +import { json, JSONValueKind } from '@graphprotocol/graph-ts' ``` -JSON データは、`json` API を使って解析することができます。 +JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -`JSONValue` クラスは、任意の JSON ドキュメントから値を引き出す方法を提供します。 JSON の値には、ブーリアン、数値、配列などがあるため、`JSONValue`には、値の種類をチェックするための`kind`プロパティが付属しています。 +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: ```typescript let value = json.fromBytes(...) @@ -635,11 +635,11 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -さらに、値が`null`かどうかをチェックするメソッドもあります: +In addition, there is a method to check if the value is `null`: - `value.isNull(): boolean` -値の型が確定している場合は,以下のいずれかの方法で[組み込み型](#built-in-types)に変換することができます。 +When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: - `value.toBool(): boolean` - `value.toI64(): i64` @@ -648,7 +648,7 @@ if (value.kind == JSONValueKind.BOOL) { - `value.toString(): string` - `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) -### タイプ 変換参照 +### Type Conversions Reference | Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | @@ -688,17 +688,17 @@ if (value.kind == JSONValueKind.BOOL) { | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### データソースのメタデータ +### Data Source Metadata -ハンドラを起動した`データソース`のコントラクトアドレス、ネットワーク、コンテキストは、以下のようにして調べることができます。 +You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### エンティティと DataSourceContext +### Entity and DataSourceContext -ベースとなる`エンティティ`クラスと子クラスの`DataSourceContext`クラスには、フィールドを動的に設定・取得するためのヘルパーが用意されています。 +The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From 5d5db48ee2faa17665927b1f6c1d509c66848b68 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:44 -0500 Subject: [PATCH 025/241] New translations assemblyscript-api.mdx (Korean) --- pages/ko/developer/assemblyscript-api.mdx | 300 +++++++++++----------- 1 file changed, 150 insertions(+), 150 deletions(-) diff --git 
a/pages/ko/developer/assemblyscript-api.mdx b/pages/ko/developer/assemblyscript-api.mdx index 7ddd83d90288..2afa431fe8c5 100644 --- a/pages/ko/developer/assemblyscript-api.mdx +++ b/pages/ko/developer/assemblyscript-api.mdx @@ -2,220 +2,220 @@ title: AssemblyScript API --- -> 참고: 만약 `graph-cli`/`graph-ts` 버전 `0.22.0` 이전의 서브그래프를 생성하는 경우, 이전 버젼의 AssemblyScript를 사용중인 경우, [`Migration Guide`](/developer/assemblyscript-migration-guide)를 참고하시길 권장드립니다. +> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) -이 페이지는 서브그래프 매핑을 작성할 때 사용할 수 있는 내장 API를 설명합니다. 다음 두 가지 종류의 API를 즉시 사용할 수 있습니다 : +This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: -- [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) 그리고 -- `graph codegen`에 의해 서브그래프 파일들에서 생성된 코드 +- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and +- code generated from subgraph files by `graph codegen`. -[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)와 호환되는 한 다른 라이브러리들을 의존성(dependencies)으로서 추가할 수도 있습니다. 이것은 언어 매핑이 작성되기 때문에 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) 위키는 언어 및 표준 라이브러리 기능과 관련한 좋은 소스입니다. +It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. -## 설치 +## Installation -[`graph init`](/developer/create-subgraph-hosted)로 생성된 서브그래프는 미리 구성된 의존성들(dependencies)을 함께 동반합니다. 이러한 의존성들을 설치하려면 다음 명령 중 하나를 실행해야 합니다. +Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: ```sh yarn install # Yarn npm install # NPM ``` -서브그래프가 처음부터 만들어진 경우 다음 두 명령 중 하나가 의존성으로서 그래프 타입스크립트 라이브러리를 설치할 것입니다. +If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: ```sh yarn add --dev @graphprotocol/graph-ts # Yarn npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## API 참조 +## API Reference -`@graphprotocol/graph-ts` 라이브러리가 다음과 같은 API들을 제공합니다. +The `@graphprotocol/graph-ts` library provides the following APIs: -- 이더리움 스마트 컨트렉트, 이벤트, 블록, 트랜젝션, 그리고 이더리움 값들과 작업하기 위한 `ethereum` API -- 더그래프 노드 스토어에서 엔티티를 로드하고 저장하기 위한 `store` API -- 더그래프 노드 출력 및 그래프 탐색기에 메세지를 기록하는 `log` API -- IPFS로부터 파일들을 로드하기 위한 `ipfs` API -- JSON 데이터를 구문 분석하는 `json` API -- 암호화 기능을 사용하기 위한 `crypto` API -- Ethereum, JSON, GraphQL 및 AssemblyScript와 같은 다양한 유형 시스템 간의 변환을 위한 저수준 프리미티브 +- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. +- A `store` API to load and save entities from and to the Graph Node store. +- A `log` API to log messages to the Graph Node output and the Graph Explorer. +- An `ipfs` API to load files from IPFS. +- A `json` API to parse JSON data. +- A `crypto` API to use cryptographic functions. +- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. 
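Each of these APIs is covered in detail below. As a minimal sketch, they are all imported from the same package:

```typescript
import { ethereum, store, log, ipfs, json, crypto } from '@graphprotocol/graph-ts'
```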
-### 버전 +### Versions -서브그래프 매니페스트의 `apiVersion`은 주어진 서브그래프에 대해 그래프 노드가 실행하는 매핑 API 버전을 지정합니다. 현재 맵핑 API 버전은 0.0.6 입니다. +The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. -| 버전 | 릴리스 노트 | -| :-: | --- | -| 0.0.6 | 이더리움 트랜잭션 개체에 `nonce` 필드를 추가했습니다.
`baseFeePerGas`가 이더리움 블록 개체에 추가되었습니다. | -| 0.0.5 | AssemblyScript를 버전 0.19.10으로 업그레이드했습니다(변경 내용 깨짐 포함. [`Migration Guide`](/developer/assemblyscript-migration-guide) 참조)
`ethereum.transaction.gasUsed`의 이름이 `ethereum.transaction.gasLimit`로 변경되었습니다. | -| 0.0.4 | Ethereum SmartContractCall 개체에 `functionSignature` 필드를 추가했습니다. | -| 0.0.3 | Ethereum Call 개체에 `from` 필드를 추가했습니다.
`etherem.call.address`의 이름이 `ethereum.call.to`로 변경되었습니다. | -| 0.0.2 | Ethereum Transaction 개체에 `input` 필드를 추가했습니다. | +| Version | Release notes | +|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | -### 기본 제공 유형 +### Built-in Types -AssemblyScript에 내장된 기본 유형에 대한 설명서는 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types)에서 확인할 수 있습니다. +Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -다음의 추가적인 유형들이 `@graphprotocol/graph-ts`에 의해 제공됩니다. +The following additional types are provided by `@graphprotocol/graph-ts`. #### ByteArray ```typescript -'@graphprotocol/graph-ts'에서 { ByteArray }를 입력합니다. +import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray`가 `u8`의 배열을 나타냅니다. +`ByteArray` represents an array of `u8`. _Construction_ -- `fromI32(x: i32): ByteArray` - `x`를 바이트로 분해합니다. -- `fromHexString(hex: string): ByteArray` - 입력 길이는 반드시 짝수여야 합니다. `0x` 접두사는 선택사항입니다. +- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. -_유형 변환_ +_Type conversions_ -- `toHexString(): string` - 접두사가 `0x`인 16진 문자열로 변환합니다. -- `toString(): string` - 바이트를 UTF-8 문자열로 해석합니다. -- `toBase58(): string` - 바이트를 base58 문자열로 인코딩합니다. -- `toU32(): u32` - 바이트를 little-endian `u32`로 해석합니다. 오버플로우의 경우에는 Throws 합니다. -- `toI32(): i32` - 바이트 배열을 little-endian `i32`로 해석합니다. 오버플로우의 경우에는 Throws 합니다. +- `toHexString(): string` - Converts to a hex string prefixed with `0x`. +- `toString(): string` - Interprets the bytes as a UTF-8 string. +- `toBase58(): string` - Encodes the bytes into a base58 string. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. +- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. -_연산자_ +_Operators_ -- `equals(y: ByteArray): bool` – `x == y`로 쓸 수 있습니다 +- `equals(y: ByteArray): bool` – can be written as `x == y`. #### BigDecimal ```typescript -'@graphprotocol/graph-ts'로 부터 { BigDecimal }을 입력합니다. +import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal`은 임의의 정밀도 소수를 나타내는 데 사용됩니다. +`BigDecimal` is used to represent arbitrary precision decimals. _Construction_ -- `constructor(bigInt: BigInt)` – `BigInt`로 부터 `BigDecimal`을 생성합니다. -- `static fromString(s: string): BigDecimal` – 10진수 문자열에서 구문 분석을 수행합니다. +- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. +- `static fromString(s: string): BigDecimal` – parses from a decimal string. -_유형 변환_ +_Type conversions_ -- `toString(): string` – 10진수 문자열로 인쇄합니다. +- `toString(): string` – prints to a decimal string. _Math_ -- `plus(y: BigDecimal): BigDecimal` – `x + y`로 쓸 수 있습니다. -- `minus(y: BigDecimal): BigDecimal` – `x - y`로 쓸 수 있습니다. -- `times(y: BigDecimal): BigDecimal` – `x * y`로 쓸 수 있습니다. -- `div(y: BigDecimal): BigDecimal` – `x / y`로 쓸 수 있습니다. -- `equals(y: BigDecimal): bool` – `x == y`로 쓸 수 있습니다. -- `notEqual(y: BigDecimal): bool` – `x != y`로 쓸 수 있습니다. -- `lt(y: BigDecimal): bool` – `x < y`로 쓸 수 있습니다. -- `le(y: BigDecimal): bool` – `x <= y`로 쓸 수 있습니다. -- `gt(y: BigDecimal): bool` – `x > y`로 쓸 수 있습니다. -- `ge(y: BigDecimal): bool` – `x >= y`로 쓸 수 있습니다. -- `neg(): BigDecimal` - `-x`로 쓸 수 있습니다. +- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. +- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. +- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. +- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. 
+- `equals(y: BigDecimal): bool` – can be written as `x == y`. +- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. +- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `le(y: BigDecimal): bool` – can be written as `x <= y`. +- `gt(y: BigDecimal): bool` – can be written as `x > y`. +- `ge(y: BigDecimal): bool` – can be written as `x >= y`. +- `neg(): BigDecimal` - can be written as `-x`. #### BigInt ```typescript -'@graphprotocol/graph-ts'에서 { BigInt }를 입력합니다. +import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt`는 큰 정수를 나타내는 데 사용됩니다. 여기에는 `uint32` ~ `uint256` 및 `int64` ~ `int256`값이 포함됩니다. `int32`, `uint24` 혹은 `int8`과 같은 `uint32` 이하는 전부 `i32`로 표시됩니다. +`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. -`BigInt` 클래스에는 다음의 API가 있습니다: +The `BigInt` class has the following API: _Construction_ -- `BigInt.fromI32(x: i32): BigInt` – `i32`로 부터 `BigInt`를 생성합니다. -- `BigInt.fromString(s: string): BigInt`– 문자열로부터 `BigInt`를 구문 분석합니다. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – `bytes`를 부호 없는 little-endian 정수로 해석합니다. 입력 값이 big-endian인 경우, 먼저 `.reverse()`를 호출하십시오. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – `bytes`를 signed, little-endian 정수로 해석합니다. 입력 값이 big-endian인 경우, 먼저 `.reverse()`를 호출하십시오. +- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. - _유형 변환_ + _Type conversions_ -- `x.toHex(): string` – `BigInt`를 16진수 문자열로 바꿉니다. -- `x.toString(): string` – `BigInt`를 10진수 문자열로 바꿉니다. -- `x.toI32(): i32` – `BigInt`를 `i32`로 반환합니다; 만약 값이 `i32`에 부합하지 않으면, 실패합니다. `x.isI32()`를 먼저 확인하는 것이 좋습니다. -- `x.toBigDecimal(): BigDecimal` - 소수 부분 없이 십진수로 변환합니다. +- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. +- `x.toString(): string` – turns `BigInt` into a decimal number string. +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. +- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. _Math_ -- `x.plus(y: BigInt): BigInt` – `x + y`로 쓸 수 있습니다. -- `x.minus(y: BigInt): BigInt` – `x - y`로 쓸 수 있습니다. -- `x.times(y: BigInt): BigInt` – `x * y`로 쓸 수 있습니다. -- `x.div(y: BigInt): BigInt` – `x / y`로 쓸 수 있습니다. -- `x.mod(y: BigInt): BigInt` – `x % y`로 쓸 수 있습니다. -- `x.equals(y: BigInt): bool` – `x == y`로 쓸 수 있습니다. -- `x.notEqual(y: BigInt): bool` – `x != y`로 쓸 수 있습니다. -- `x.lt(y: BigInt): bool` – `x < y`로 쓸 수 있습니다. -- `x.le(y: BigInt): bool` – `x <= y`로 쓸 수 있습니다. -- `x.gt(y: BigInt): bool` – `x > y`로 쓸 수 있습니다. -- `x.ge(y: BigInt): bool` – `x >= y`로 쓸 수 있습니다. -- `x.neg(): BigInt` – `-x`로 쓸 수 있습니다. -- `x.divDecimal(y: BigDecimal): BigDecimal` – 십진수로 나누어, 십진 결과를 제공합니다. -- `x.isZero(): bool` – 숫자가 0인지 확인하는데 편리합니다. -- `x.isI32(): bool` – 숫자가 `i32`에 부합하는지 확인합니다. -- `x.abs(): BigInt` – 절대값. -- `x.pow(exp: u8): BigInt` – 지수화. -- `bitOr(x: BigInt, y: BigInt): BigInt` – `x | y`로 쓸 수 있습니다. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – `x & y`로 쓸 수 있습니다. 
-- `leftShift(x: BigInt, bits: u8): BigInt` – `x << y`로 쓸 수 있습니다. -- `rightShift(x: BigInt, bits: u8): BigInt` – `x >> y`로 쓸 수 있습니다. +- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. +- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. +- `x.times(y: BigInt): BigInt` – can be written as `x * y`. +- `x.div(y: BigInt): BigInt` – can be written as `x / y`. +- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. +- `x.equals(y: BigInt): bool` – can be written as `x == y`. +- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. +- `x.lt(y: BigInt): bool` – can be written as `x < y`. +- `x.le(y: BigInt): bool` – can be written as `x <= y`. +- `x.gt(y: BigInt): bool` – can be written as `x > y`. +- `x.ge(y: BigInt): bool` – can be written as `x >= y`. +- `x.neg(): BigInt` – can be written as `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. +- `x.isZero(): bool` – Convenience for checking if the number is zero. +- `x.isI32(): bool` – Check if the number fits in an `i32`. +- `x.abs(): BigInt` – Absolute value. +- `x.pow(exp: u8): BigInt` – Exponentiation. +- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. #### TypedMap ```typescript -'@graphprotocol/graph-ts'에서 { TypedMap }를 입력합니다. +import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap`는 key-value 쌍을 저장하는데 사용될 수 있습니다. [이 예](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51)를 보시기 바랍니다. +`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). -`TypedMap` 클래스에는 다음의 API가 있습니다. +The `TypedMap` class has the following API: -- `new TypedMap()` – 유형 `K`의 키와 유형 `T`의 값을 사용하여 빈 맵을 생성합니다. -- `map.set(key: K, value: V): void` – `key` 값을 `value`로 설정합니다. -- `map.getEntry(key: K): TypedMapEntry | null` – 만약 `key`가 맵에 존재하지 않는 경우, `key` 혹은 `null` 에 대한 key-value 쌍을 반환합니다. -- `map.get(key: K): V | null` – 만약 `key`가 맵에 존재하지 않으면, `key` 혹은 `null` 값을 반환합니다. -- `map.isSet(key: K): bool` – 만약 `key`는 맵에 존재하나, `false`가 맵에 존재하지 않는 경우, `true`를 반환합니다. +- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` +- `map.set(key: K, value: V): void` – sets the value of `key` to `value` +- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map +- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map +- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not #### Bytes ```typescript -'@graphprotocol/graph-ts'에서 { Bytes }를 입력합니다. +import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes`는 임의 길이의 바이트 배열을 나타내는 데 사용됩니다. 이는 `bytes`, `bytes32` 등의 이더리움 값을 포함합니다. +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. 
-`Bytes` 클래스는 AssemblyScript의 [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64)를 확장하며, 모든 `Uint8Array` 기능과 다음과 같은 새 매서드를 지원합니다: +The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: -- `b.toHex()` – 배열상의 바이트를 나타내는 16진수 문자열을 반환합니다. -- `b.toString()` – 배열상의 바이트를 유니코드 문자 문자열로 변환합니다. -- `b.toBase58()` – 이더리움 바이트 값을 base58 인코딩(IPFS 해시에 사용)으로 변환합니다. +- `b.toHex()` – returns a hexadecimal string representing the bytes in the array +- `b.toString()` – converts the bytes in the array to a string of unicode characters +- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) #### Address ```typescript -'@graphprotocol/graph-ts'에서 { Address } 를 입력합니다. +import { Address } from '@graphprotocol/graph-ts' ``` -`Address`는 `Bytes`를 확장하여 이더리움 `address` 값을 나타냅니다. +`Address` extends `Bytes` to represent Ethereum `address` values. -`Bytes` API 위에 다음 메서드를 추가합니다: +It adds the following method on top of the `Bytes` API: -- `Address.fromString(s: string): Address` – 16진수 문자열에서 `Address` 를 생성합니다. +- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string ### Store API ```typescript -'@graphprotocol/graph-ts'에서 { store }를 입력합니다. +import { store } from '@graphprotocol/graph-ts' ``` -`store` API 를 사용하면 더 그래프 노드 스토어에서 엔티티를 로드, 저장 및 제거할 수 있습니다. +The `store` API allows to load, save and remove entities from and to the Graph Node store. -스토어에 작성된 엔티티는 서브그래프의 GraphQL 스키마에 정의된 `@entity` 유형에 일대일로 매핑됩니다. 이러한 엔터티 작업을 편리하게 하기 위해 [Graph CLI](https://github.com/graphprotocol/graph-cli)에서 제공하는 `graph codegen` 명령은 기본 제공 `Entity` 유형의 서브 클래스인 엔터티 클래스를 생성하며, 스키마의 필드에 대한 속성 getter 및 setter와 이러한 엔티티를 로드 및 저장하는 메서드를 사용합니다. +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities -다음은 이더리움 이벤트에서 엔티티를 생성하기 위한 일반적인 패턴입니다. +The following is a common pattern for creating entities from Ethereum events. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -체인을 처리하는 동안 `Transfer` 이벤트가 발생하면, 이는 생성된 `Transfer` 유형(엔터티 유형과 이름 충돌이 발생하지 않도록 여기서 `TransferEvent`로 별칭 지정)을 사용하여 `handleTransfer` 이벤트 핸들러에 전달됩니다. 이 유형을 사용하면 이벤트의 상위 트랜잭션 및 해당 매개 변수와 같은 데이터에 액세스할 수 있습니다. +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -각 엔티티는 다른 엔티티와의 충돌을 피하기 위해 고유한 ID를 가져야 합니다. 이벤트 매개변수에 사용할 수 있는 고유 식별자가 포함되는 것은 매우 일반적입니다. 참고: 트랜잭션 해시를 ID로 사용하면 동일한 트랜잭션의 다른 이벤트가 이 해시를 ID로 사용하여 엔티티를 만들지 않는다고 가정합니다. +Each entity must have a unique ID to avoid collisions with other entities. 
It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. -#### 스토어에서 엔티티 로드 +#### Loading entities from the store -엔티티가 이미 존재하는 경우, 이는 다음을 사용하여 스토어에서 로드할 수 있습니다. +If an entity already exists, it can be loaded from the store with the following: ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -엔티티가 스토어에 아직 존재하지 않을 수도 있으므로, `load` 메서드는 `Transfer | null` 유형의 값을 반환합니다. 떠라서 해당 값을 사용하기 전에 `null` 케이스를 확인해야 할 수 있습니다. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. -> **Note:**: 매핑에서 변경한 내용이 엔티티의 이전 데이터에 종속된 경우에만 엔티티 로드가 필요합니다. 다음 섹션에서 기존 엔티티들을 업데이트하는 두 가지 방법을 확인하시기 바랍니다. +> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. -#### 기존 엔티티 업데이트 +#### Updating existing entities -기존 엔티티를 업데이트 하는 방법에는 두 가지가 있습니다. +There are two ways to update an existing entity: -1. 엔터티를 로드합니다. `Transfer.load(id)`를 예로들어, 엔터티의 속성을 설정한 다음, 스토어에 다시 `.save()`합니다. -2. `new Transfer(id)`를 예로 들어, 간단하게 엔티티를 생성하기만 하면 됩니다. 엔티티의 속성을 설정한 다음 이를 스토어에 `.save()` 합니다. 만약 엔티티가 이미 존재하는 경우, 변경사항들은 병합됩니다. +1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. +2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. -속성 변경은 생성된 속성 설정기 덕분에 대부분의 경우 간단합니다. +Changing properties is straight forward in most cases, thanks to the generated property setters: ```typescript let transfer = new Transfer(id) @@ -279,51 +279,51 @@ transfer.to = ... transfer.amount = ... ``` -다음 두 가지 지침 중 하나로 속성을 설정 해제할 수도 있습니다. +It is also possible to unset properties with one of the following two instructions: ```typescript transfer.from.unset() transfer.from = null ``` -이는 오직 선택적 속성으로만 작동하는데, 예를 들어 GraphQL에서 `!` 없이 표기된 속성들입니다. `owner: Bytes` 혹은 `amount: BigInt`를 두 가지 예로 들 수 있습니다. +This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. -엔터티에서 배열을 가져오면 해당 배열의 복사본이 생성되기 때문에 배열 속성 업데이트는 조금 더 복잡합니다. 이는 배열을 변경한 후 명시적으로 배열 속성을 다시 설정해야 함을 의미합니다. 다음은 `entity`에 `numbers: [BigInt!]!` 필드가 있다고 가정합니다. +Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. ```typescript -// 이는 작동하지 않을 것입니다. +// This won't work entity.numbers.push(BigInt.fromI32(1)) entity.save() -// 이는 작동 할 것입니다. +// This will work let numbers = entity.numbers numbers.push(BigInt.fromI32(1)) entity.numbers = numbers entity.save() ``` -#### 스토어에서 엔티티 제거하기 +#### Removing entities from the store -현재 생성된 유형을 통해 엔티티를 제거할 수 있는 방법은 없습니다. 대신 엔티티를 제거하려면 엔티티 유형의 이름과 엔티티 ID를 `store.remove`에 전달해야 합니다. +There is currently no way to remove an entity via the generated types. 
Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: ```typescript -'@graphprotocol/graph-ts'에서 { store }를 입력합니다. +import { store } from '@graphprotocol/graph-ts' ... let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` -### 이더리움 API +### Ethereum API -이더리움 API는 스마트 컨트렉트, 퍼블릭 상태 변수, 컨트렉트 기능, 이벤트, 트랜잭션, 블록 및 이더리움 데이터 인코딩/디코딩에 대한 액세스를 제공합니다. +The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. -#### 이더리움 유형 지원 +#### Support for Ethereum Types -엔터티와 마찬가지로 `graph codegen`은 서브그래프에서 사용되는 모든 스마트 컨트랙트 및 이벤트에 대한 클래스를 생성합니다. 이를 위해 컨트랙트 ABI는 서브그래프 매니페스트에서 데이터 소스의 일부여야 합니다. 일반적으로 ABI 파일은 `abis/` 폴더에 저장됩니다. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -생성된 클래스를 사용하면 이더리움 유형과 [내장 유형](#built-in-types) 간의 변환이 뒤에서 이루어지므로 서브그래프 작성자는 이에 대해 걱정할 필요가 없습니다. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. -다음의 예가 이를 보여줍니다. 다음과 같은 서브그래프 스키마가 주어지면 +The following example illustrates this. Given a subgraph schema like ```graphql type Transfer @entity { @@ -333,7 +333,7 @@ type Transfer @entity { } ``` -그리고 이더리움 상의 `Transfer(address,address,uint256)` 이벤트 서명, `from`, `to` 및 `amount` 유형 값 `address`, `address` 그리고 `uint256`는 `Address` 및 `BigInt`로 변환되고, `Bytes!` 및 `Transfer` 엔티티의 `BigInt!` 속성에 전달됩니다: +and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: ```typescript let id = event.transaction.hash.toHex() @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### 이벤트 및 블록/트랜젝션 데이터 +#### Events and Block/Transaction Data -이전의 예시에서 `Transfer` 이벤트에 대해 설명한 바와 같이, 이벤트 핸들로들에게 전달된 이더리움 이벤트들은 이벤트 매개변수에 엑세스를 제공할 뿐만 아니라 상위 트랜잭션과 이벤트 핸들러가 속한 블록에 대한 액세스를 제공합니다. 다음의 데이터는 이벤트 인스턴스(이러한 클래스들은 `graph-ts`의 `ethereum` 모듈의 일부입니다)에서 얻을 수 있습니다: +Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. 
The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): ```typescript class Event { @@ -621,10 +621,10 @@ import { json, JSONValueKind } from '@graphprotocol/graph-ts' JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: @@ -646,9 +646,9 @@ When the type of a value is certain, it can be converted to a [built-in type](#b - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (이후 `JSONValue`를 상기 5개 방법 중 하나로 변환합니다.) +- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) -### 유형 변환 참조 +### Type Conversions Reference | Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | @@ -688,17 +688,17 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### 데이터 소스 메타데이터 +### Data Source Metadata -`dataSource` 네임스페이스를 통해 핸들러를 호출한 데이터 소스의 계약 주소, 네트워크 및 컨텍스트를 검사할 수 있습니다 +You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### 엔티티 및 Entity and DataSourceContext +### Entity and DataSourceContext -기본 `Entity` 클래스 및 child `DataSourceContext`는 필드를 동적으로 설정하고 필드를 가져오는 도우미가 있습니다. 
+The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From 21fea9ba805a3ce0d1bac7ad0a0085ac64072887 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:45 -0500 Subject: [PATCH 026/241] New translations assemblyscript-api.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-api.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/zh/developer/assemblyscript-api.mdx b/pages/zh/developer/assemblyscript-api.mdx index b5066fab02f2..2afa431fe8c5 100644 --- a/pages/zh/developer/assemblyscript-api.mdx +++ b/pages/zh/developer/assemblyscript-api.mdx @@ -621,10 +621,10 @@ import { json, JSONValueKind } from '@graphprotocol/graph-ts' JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: From d3ebcbae750beaa3ffefef737d7a7f69270b2dde Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:46 -0500 Subject: [PATCH 027/241] New translations assemblyscript-api.mdx (Vietnamese) --- pages/vi/developer/assemblyscript-api.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/vi/developer/assemblyscript-api.mdx b/pages/vi/developer/assemblyscript-api.mdx index b5066fab02f2..2afa431fe8c5 100644 --- a/pages/vi/developer/assemblyscript-api.mdx +++ b/pages/vi/developer/assemblyscript-api.mdx @@ -621,10 +621,10 @@ import { json, JSONValueKind } from '@graphprotocol/graph-ts' JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. 
Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: From d09ce0d4735a4291c0fde4297194f36716ab0937 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:47 -0500 Subject: [PATCH 028/241] New translations assemblyscript-migration-guide.mdx (Spanish) --- .../assemblyscript-migration-guide.mdx | 184 +++++++++--------- 1 file changed, 92 insertions(+), 92 deletions(-) diff --git a/pages/es/developer/assemblyscript-migration-guide.mdx b/pages/es/developer/assemblyscript-migration-guide.mdx index acdc2366df9b..2db90a608110 100644 --- a/pages/es/developer/assemblyscript-migration-guide.mdx +++ b/pages/es/developer/assemblyscript-migration-guide.mdx @@ -1,50 +1,50 @@ --- -title: Guia de Migracion de AssemblyScript +title: AssemblyScript Migration Guide --- -Hasta ahora, los subgrafos han utilizado una de las [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finalmente, hemos añadido soporte para la [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉! 🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Esto permitirá a los desarrolladores de subgrafos utilizar las nuevas características del lenguaje AS y la libreria estándar. +That will enable subgraph developers to use newer features of the AS language and standard library. -Esta guia es aplicable para cualquiera que use `graph-cli`/`graph-ts` bajo la version `0.22.0`. Si ya estás en una versión superior (o igual) a esa, ya has estado usando la versión `0.19.10` de AssemblyScript 🙂 +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Nota: A partir de `0.24.0`, `graph-node` puede soportar ambas versiones, dependiendo del `apiVersion` especificado en el manifiesto del subgrafo. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
-## Caracteristicas +## Features -### Nueva Funcionalidad +### New functionality -- `TypedArray`s ahora puede construirse desde `ArrayBuffer`s usando el [nuevo `wrap` metodo estatico](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- Nuevas funciones de la biblioteca estándar: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Se agrego soporte para x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Se agrego `StaticArray`, una mas eficiente variante de array ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Se agrego `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Se implemento el argumento `radix` en `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Se agrego soporte para los separadores en los literales de punto flotante ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Se agrego soporte para las funciones de primera clase ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Se agregaron builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Se implemento `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Se agrego soporte para las plantillas de strings literales ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Se agrego `encodeURI(Component)` y `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Se agrego `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Se agrego `toUTCString` para `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Se agrego `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Added support for first class functions 
([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizaciones +### Optimizations -- `Math` funciones como `exp`, `exp2`, `log`, `log2` y `pow` fueron reemplazadas por variantes mas rapidas ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Optimizar ligeramente `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Caché de más accesos a campos en std Map y Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimizar para potencias de dos en `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Otros +### Other -- El tipo de un literal de array puede ahora inferirse a partir de su contenido ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Actualizado stdlib a Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## Como actualizar? +## How to upgrade? -1. Cambiar tus asignaciones `apiVersion` en `subgraph.yaml` a `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: ```yaml ... @@ -56,7 +56,7 @@ dataSources: ... ``` -2. Actualiza la `graph-cli` que usas a la `latest` version ejecutando: +2. Update the `graph-cli` you're using to the `latest` version by running: ```bash # if you have it globally installed @@ -66,20 +66,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Haz lo mismo con `graph-ts`, pero en lugar de instalarlo globalmente, guárdalo en tus dependencias principales: +3. 
Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: ```bash npm install --save @graphprotocol/graph-ts@latest ``` -4. Sigue el resto de la guía para arreglar los cambios que rompen el idioma. -5. Ejecuta `codegen` y `deploy` nuevamente. +4. Follow the rest of the guide to fix the language breaking changes. +5. Run `codegen` and `deploy` again. -## Rompiendo los esquemas +## Breaking changes -### Anulabilidad +### Nullability -En la versión anterior de AssemblyScript, podías crear un código como el siguiente: +On the older version of AssemblyScript, you could create code like this: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -Sin embargo, en la versión más reciente, debido a que el valor es anulable, es necesario que lo compruebes, así: +However on the newer version, because the value is nullable, it requires you to check, like this: ```typescript let maybeValue = load() @@ -98,7 +98,7 @@ if (maybeValue) { } ``` -O forzarlo asi: +Or force it like this: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -106,11 +106,11 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Si no estás seguro de cuál elegir, te recomendamos que utilices siempre la versión segura. Si el valor no existe, es posible que quieras hacer una declaración if temprana con un retorno en tu handler de subgrafo. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. ### Variable Shadowing -Antes podías hacer [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) y un código como este funcionaría: +Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: ```typescript let a = 10 @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -Sin embargo, ahora esto ya no es posible, y el compilador devuelve este error: +However now this isn't possible anymore, and the compiler returns this error: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -127,9 +127,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -Tendrás que cambiar el nombre de las variables duplicadas si tienes una variable shadowing. -### Comparaciones Nulas -Al hacer la actualización en ut subgrafo, a veces pueden aparecer errores como estos: +You'll need to rename your duplicate variables if you had variable shadowing. +### Null Comparisons +By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -137,7 +137,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -Para solucionarlo puedes simplemente cambiar la declaracion `if` por algo así: +To solve you can simply change the `if` statement to something like this: ```typescript if (!decimals) { @@ -147,23 +147,23 @@ Para solucionarlo puedes simplemente cambiar la declaracion `if` por algo así: if (decimals === null) { ``` -Lo mismo ocurre si haces != en lugar de ==. +The same applies if you're doing != instead of ==. 
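For instance, a small sketch of the `!=` variant under the same assumption as the error above (a nullable `BigInt` named `decimals`); as with `==`, the strict operator or a plain truthiness check compiles where the loose comparison does not:

```typescript
import { BigInt } from '@graphprotocol/graph-ts'

// `decimals` stands in for the nullable value from the error above.
let decimals: BigInt | null = BigInt.fromI32(18)

// `if (decimals != null)` trips the same compiler error as the `==` case.
// Rely on truthiness instead:
if (decimals) {
  // `decimals` is a plain BigInt inside this block
}

// ...or use the strict operator:
if (decimals !== null) {
  // also compiles
}
```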
### Casting -La forma común de hacer el casting antes era simplemente usar la palabra clave `as`, así: +The common way to do casting before was to just use the `as` keyword, like this: ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -Sin embargo, esto sólo funciona en dos casos: +However this only works in two scenarios: -- Casting de primitivas (entre tipos como `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); -- Upcasting en la herencia de clases (subclase → superclase) +- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Upcasting on class inheritance (subclass → superclass) -Ejemplos: +Examples: ```typescript // primitive casting @@ -179,10 +179,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -Hay dos escenarios en los que puede querer cast, pero usando `as`/`var` **no es seguro**: +There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: -- Downcasting en la herencia de clases (superclase → subclase) -- Entre dos tipos que comparten una superclase +- Downcasting on class inheritance (superclass → subclass) +- Between two types that share a superclass ```typescript // downcasting on class inheritance @@ -199,7 +199,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -Para esos casos, puedes usar la funcion`changetype`: +For those cases, you can use the `changetype` function: ```typescript // downcasting on class inheritance @@ -218,7 +218,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -Si sólo quieres eliminar la anulabilidad, puedes seguir usando el `as` operador (o `variable`), pero asegúrate de que el valor no puede ser nulo, de lo contrario se romperá. +If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. 
```typescript // remove nullability @@ -231,18 +231,18 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -Para el caso de la anulabilidad se recomienda echar un vistazo al [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), hara que tu codigo sea mas limpio 🙂 +For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 -También hemos añadido algunos métodos estáticos más en algunos tipos para facilitar el casting, son: +Also we've added a few more static methods in some types to ease casting, they are: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Comprobación de anulabilidad con acceso a la propiedad +### Nullability check with property access -Para usar el [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) puedes usar la declaracion `if` o el operador ternario (`?` and `:`) asi: +To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: ```typescript let something: string | null = 'data' @@ -260,7 +260,7 @@ if (something) { } ``` -Sin embargo eso sólo funciona cuando estás haciendo el `if` / ternario en una variable, no en un acceso a una propiedad, como este: +However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: ```typescript class Container { @@ -273,7 +273,7 @@ container.data = 'data' let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` -Lo que produce este error: +Which outputs this error: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -281,7 +281,7 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` -Para solucionar este problema, puedes crear una variable para ese acceso a la propiedad de manera que el compilador pueda hacer la magia de la comprobación de nulidad: +To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript class Container { @@ -296,9 +296,9 @@ let data = container.data let somethingOrElse: string = data ? data : 'else' // compiles just fine :) ``` -### Sobrecarga de operadores con acceso a propiedades +### Operator overloading with property access -Si intentas sumar (por ejemplo) un tipo anulable (desde un acceso a una propiedad) con otro no anulable, el compilador de AssemblyScript en lugar de dar un error en el tiempo de compilación advirtiendo que uno de los valores es anulable, simplemente compila en silencio, dando oportunidad a que el código se rompa en tiempo de ejecución. +If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. 
```typescript class BigInt extends Uint8Array { @@ -322,7 +322,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -Hemos abierto un tema en el compilador de AssemblyScript para esto, pero por ahora si haces este tipo de operaciones en tus mapeos de subgrafos, deberías cambiarlos para hacer una comprobación de nulos antes de ello. +We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. ```typescript let wrapper = new Wrapper(y) @@ -334,9 +334,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### Inicialización del valor +### Value initialization -Si tienes algún código como este: +If you have any code like this: ```typescript var value: Type // null @@ -344,7 +344,7 @@ value.x = 10 value.y = 'content' ``` -Compilará pero se romperá en tiempo de ejecución, eso ocurre porque el valor no ha sido inicializado, así que asegúrate de que tu subgrafo ha inicializado sus valores, así: +It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: ```typescript var value = new Type() // initialized @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -También si tienes propiedades anulables en una entidad GraphQL, como esta: +Also if you have nullable properties in a GraphQL entity, like this: ```graphql type Total @entity { @@ -361,7 +361,7 @@ type Total @entity { } ``` -Y tienes un código similar a este: +And you have code similar to this: ```typescript let total = Total.load('latest') @@ -373,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -Tendrás que asegurarte de inicializar el valor `total.amount`, porque si intentas acceder como en la última línea para la suma, se bloqueará. Así que o bien la inicializas primero: +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. 
So you either initialize it first: ```typescript let total = Total.load('latest') @@ -386,7 +386,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -O simplemente puedes cambiar tu esquema GraphQL para no usar un tipo anulable para esta propiedad, entonces la inicializaremos como cero en el paso `codegen` 😉 +Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 ```graphql type Total @entity { @@ -405,9 +405,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### Inicialización de las propiedades de la clase +### Class property initialization -Si exportas alguna clase con propiedades que son otras clases (declaradas por ti o por la libreria estándar) así: +If you export any classes with properties that are other classes (declared by you or by the standard library) like this: ```typescript class Thing {} @@ -417,7 +417,7 @@ export class Something { } ``` -El compilador dará un error porque tienes que añadir un inicializador para las propiedades que son clases, o añadir el operador `!`: +The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: ```typescript export class Something { @@ -441,11 +441,11 @@ export class Something { } ``` -### Esquema GraphQL +### GraphQL schema -Esto no es un cambio directo de AssemblyScript, pero es posible que tengas que actualizar tu archivo `schema.graphql`. +This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. -Ahora ya no puedes definir campos en tus tipos que sean Listas No Anulables. Si tienes un esquema como este: +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: ```graphql type Something @entity { @@ -458,7 +458,7 @@ type MyEntity @entity { } ``` -Tendrás que añadir un `!` al miembro de la Lista tipo, así: +You'll have to add an `!` to the member of the List type, like this: ```graphql type Something @entity { @@ -471,14 +471,14 @@ type MyEntity @entity { } ``` -Esto ha cambiado debido a las diferencias de anulabilidad entre las versiones de AssemblyScript, y está relacionado con el archivo `src/generated/schema.ts` (ruta por defecto, puede que lo hayas cambiado). +This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). -### Otros +### Other -- Alineado `Map#set` y `Set#add` con el spec, devolviendo `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Las arrays ya no heredan de ArrayBufferView, sino que son distintas ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Las clases inicializadas a partir de literales de objetos ya no pueden definir un constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- El resultado de una operación binaria `**` es ahora el entero denominador común si ambos operandos son enteros. 
Anteriormente, el resultado era un flotante como si se llamara a `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coercionar `NaN` a `false` cuando casting a `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- Al desplazar un valor entero pequeño de tipo `i8`/`u8` o `i16`/`u16`, sólo los 3 o 4 bits menos significativos del valor RHS afectan al resultado, de forma análoga al resultado de un `i32.shl` que sólo se ve afectado por los 5 bits menos significativos del valor RHS. Ejemplo: `someI8 << 8` previamente producia el valor `0`, pero ahora produce `someI8` debido a enmascarar el RHS como `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) -- Corrección de errores en las comparaciones de strings relacionales cuando los tamaños difieren ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From 78de9ff8bc5cb6aafbd3a8e8541d81f3bd83b7a3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:49 -0500 Subject: [PATCH 029/241] New translations assemblyscript-migration-guide.mdx (Arabic) --- .../assemblyscript-migration-guide.mdx | 158 +++++++++--------- 1 file changed, 79 insertions(+), 79 deletions(-) diff --git a/pages/ar/developer/assemblyscript-migration-guide.mdx b/pages/ar/developer/assemblyscript-migration-guide.mdx index d0eba1f9a31a..2db90a608110 100644 --- a/pages/ar/developer/assemblyscript-migration-guide.mdx +++ b/pages/ar/developer/assemblyscript-migration-guide.mdx @@ -1,50 +1,50 @@ --- -title: دليل ترحيل AssemblyScript +title: AssemblyScript Migration Guide --- -حتى الآن ، كانت ال Subgraphs تستخدم أحد [ الإصدارات الأولى من AssemblyScript ](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). أخيرًا ، أضفنا الدعم لـ [ أحدث دعم متاح ](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -سيمكن ذلك لمطوري ال Subgraph من استخدام مميزات أحدث للغة AS والمكتبة القياسية. +That will enable subgraph developers to use newer features of the AS language and standard library. -ينطبق هذا الدليل على أي شخص يستخدم `graph-cli`/`graph-ts` ادنى من الإصدار `0.22.0`. إذا كنت تستخدم بالفعل إصدارًا أعلى من (أو مساويًا) لذلك ، فأنت بالفعل تستخدم الإصدار ` 0.19.10 ` من AssemblyScript 🙂 +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> ملاحظة: اعتبارًا من ` 0.24.0 ` ، يمكن أن يدعم ` grapg-node ` كلا الإصدارين ، اعتمادًا على ` apiVersion ` المحدد في Subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. -## مميزات +## Features -### وظائف جديدة +### New functionality - `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- وظائف المكتبة القياسية الجديدة`String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- تمت إضافة دعم لـ x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- تمت إضافة`Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- تم تنفيذ`radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) - Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- دعم إضافي لوظائف الدرجة الأولى ([ v0.14.0 ](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- إضافة البناء: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- تنفيذ `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Implement 
`Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) - Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- أضف`encodeURI(Component)` و `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- أضف`toString`, `toDateString` و `toTimeString` ل `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- أضف`toUTCString` ل `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- أضف`nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### التحسينات +### Optimizations -- `Math` دوال مثل `exp`, `exp2`, `log`, `log2` and `pow` تم استبدالها بمتغيرات أسرع ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- أكثر تحسينا `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- قم بتحسين قدرات اثنين في ` ipow32 / 64 ` ([ v0.18.2 ](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### آخر +### Other -- يمكن الآن استنتاج نوع array literal من محتوياتها([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- تم تحديث stdlib إلى Unicode 13.0.0([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## كيف تقوم بالترقية؟ +## How to upgrade? -1. تغيير ال Mappings الخاص بك ` apiVersion ` في ` subgraph.yaml ` إلى ` 0.0.6 `: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: ```yaml ... @@ -56,7 +56,7 @@ dataSources: ... ``` -2. قم بتحديث ` graph-cli ` الذي تستخدمه إلى ` أحدث إصدار ` عن طريق تشغيل: +2. Update the `graph-cli` you're using to the `latest` version by running: ```bash # if you have it globally installed @@ -66,20 +66,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. افعل الشيء نفسه مع ` graph-ts ` ، ولكن بدلاً من التثبيت بشكل عام ، احفظه في dependencies الرئيسية: +3. 
Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: ```bash npm install --save @graphprotocol/graph-ts@latest ``` 4. Follow the rest of the guide to fix the language breaking changes. -5. قم بتشغيل ` codegen ` و ` deploy` مرة أخرى. +5. Run `codegen` and `deploy` again. ## Breaking changes ### Nullability -في الإصدار الأقدم من AssemblyScript ، يمكنك إنشاء كود مثل هذا: +On the older version of AssemblyScript, you could create code like this: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -ولكن في الإصدار الأحدث ، نظرًا لأن القيمة nullable ، فإنها تتطلب منك التحقق ، مثل هذا: +However on the newer version, because the value is nullable, it requires you to check, like this: ```typescript let maybeValue = load() @@ -98,7 +98,7 @@ if (maybeValue) { } ``` -أو إجباره على هذا النحو: +Or force it like this: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -إذا لم تكن متأكدًا من اختيارك ، فنحن نوصي دائمًا باستخدام الإصدار الآمن. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. ### Variable Shadowing @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -لكن هذا لم يعد ممكنًا الآن ، ويعيد المترجم هذا الخطأ: +However now this isn't possible anymore, and the compiler returns this error: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -127,9 +127,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -ستحتاج إلى إعادة تسمية المتغيرات المكررة إذا كان لديك variable shadowing. -### مقارنات ملغية(Null Comparisons) -من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه: +You'll need to rename your duplicate variables if you had variable shadowing. +### Null Comparisons +By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -137,7 +137,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -لحل المشكلة يمكنك ببساطة تغيير عبارة ` if ` إلى شيء مثل هذا: +To solve you can simply change the `if` statement to something like this: ```typescript if (!decimals) { @@ -147,23 +147,23 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i if (decimals === null) { ``` -الأمر نفسه ينطبق إذا كنت تفعل! = بدلاً من ==. +The same applies if you're doing != instead of ==. 
### Casting -كانت الطريقة الشائعة لإجراء ال Casting من قبل هي استخدام `as`كلمة رئيسية ، مثل هذا: +The common way to do casting before was to just use the `as` keyword, like this: ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -لكن هذا لا يعمل إلا في سيناريوهين: +However this only works in two scenarios: -- Primitive casting (بين انواع مثل`u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); - Upcasting on class inheritance (subclass → superclass) -أمثلة: +Examples: ```typescript // primitive casting @@ -179,10 +179,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -هناك سيناريوهين قد ترغب في ال cast ، ولكن باستخدام`as`/`var` **ليس آمنا**: +There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: - Downcasting on class inheritance (superclass → subclass) -- بين نوعين يشتركان في فئة superclass +- Between two types that share a superclass ```typescript // downcasting on class inheritance @@ -199,7 +199,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -في هذه الحالة يمكنك إستخدام`changetype` دالة: +For those cases, you can use the `changetype` function: ```typescript // downcasting on class inheritance @@ -218,7 +218,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -إذا كنت تريد فقط إزالة nullability ، فيمكنك الاستمرار في استخدام ` as ` (أو `variable`) ، ولكن تأكد من أنك تعرف أن القيمة لا يمكن أن تكون خالية ، وإلا فإنه سوف ينكسر. +If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. ```typescript // remove nullability @@ -231,23 +231,23 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -بالنسبة لحالة ال nullability ، نوصي بإلقاء نظرة على [ مميزة التحقق من nullability ](https://www.assemblyscript.org/basics.html#nullability-checks) ، ستجعل الكود أكثر نظافة 🙂 +For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 -أضفنا أيضًا بعض ال static methods في بعض الأنواع وذلك لتسهيل عملية ال Casting ، وهي: +Also we've added a few more static methods in some types to ease casting, they are: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### التحقق من Nullability مع الوصول الى الخاصية +### Nullability check with property access -لاستخدام [ مميزة التحقق من nullability ](https://www.assemblyscript.org/basics.html#nullability-checks) ، يمكنك استخدام عبارات ` if ` أو عامل التشغيل الثلاثي (`؟ ` and `: `) مثل هذا: +To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: ```typescript let something: string | null = 'data' -let somethingOrElse = something ؟ something : 'else' +let somethingOrElse = something ? 
something : 'else' // or @@ -260,7 +260,7 @@ if (something) { } ``` -ومع ذلك ، فإن هذا لا يعمل إلا عند تنفيذ ` if ` / ternary على متغير ، وليس على خاصية الوصول ، مثل هذا: +However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: ```typescript class Container { @@ -270,15 +270,15 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ؟ container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` -الذي يخرج هذا الخطأ: +Which outputs this error: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. - let somethingOrElse: string = container.data ؟ container.data : "else"; + let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: @@ -293,10 +293,10 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ؟ data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? data : 'else' // compiles just fine :) ``` -### التحميل الزائد للمشغل مع الوصول للخاصية +### Operator overloading with property access If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. @@ -322,7 +322,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -لقد فتحنا مشكلة في مترجم AssemblyScript ، ولكن في الوقت الحالي إذا أجريت هذا النوع من العمليات في Subgraph mappings ، فيجب عليك تغييرها لإجراء فحص ل null قبل ذلك. +We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. ```typescript let wrapper = new Wrapper(y) @@ -334,9 +334,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### تهيئة القيمة +### Value initialization -إذا كان لديك أي كود مثل هذا: +If you have any code like this: ```typescript var value: Type // null @@ -344,7 +344,7 @@ value.x = 10 value.y = 'content' ``` -سيتم تجميعها لكنها ستتوقف في وقت التشغيل ، وهذا يحدث لأن القيمة لم تتم تهيئتها ، لذا تأكد من أن ال subgraph قد قام بتهيئة قيمها ، على النحو التالي: +It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: ```typescript var value = new Type() // initialized @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -وأيضًا إذا كانت لديك خصائص ل nullable في كيان GraphQL ، مثل هذا: +Also if you have nullable properties in a GraphQL entity, like this: ```graphql type Total @entity { @@ -361,7 +361,7 @@ type Total @entity { } ``` -ولديك كود مشابه لهذا: +And you have code similar to this: ```typescript let total = Total.load('latest') @@ -373,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -ستحتاج إلى التأكد من تهيئة`total.amount`القيمة ، لأنه إذا حاولت الوصول كما في السطر الأخير للمجموع ، فسوف يتعطل. 
لذلك إما أن تقوم بتهيئته أولاً: +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: ```typescript let total = Total.load('latest') @@ -386,7 +386,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -أو يمكنك فقط تغيير مخطط GraphQL الخاص بك بحيث لا تستخدم نوع nullable لهذه الخاصية ، ثم سنقوم بتهيئته على أنه صفر في الخطوة`codegen`😉 +Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 ```graphql type Total @entity { @@ -405,9 +405,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### تهيئة خاصية الفئة +### Class property initialization -إذا قمت بتصدير أي فئات ذات خصائص فئات أخرى (تم تعريفها بواسطتك أو بواسطة المكتبة القياسية) مثل هذا: +If you export any classes with properties that are other classes (declared by you or by the standard library) like this: ```typescript class Thing {} @@ -417,7 +417,7 @@ export class Something { } ``` -فإن المترجم سيخطئ لأنك ستحتاج إما إضافة مُهيئ للخصائص التي هي فئات ، أو إضافة عامل التشغيل `! `: +The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: ```typescript export class Something { @@ -441,11 +441,11 @@ export class Something { } ``` -### مخطط GraphQL +### GraphQL schema -هذا ليس تغيير مباشرا ل AssemblyScript ، ولكن قد تحتاج إلى تحديث ملف ` schema.graphql ` الخاص بك. +This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. -الآن لم يعد بإمكانك تعريف الحقول في الأنواع الخاصة بك والتي هي قوائم Non-Nullable. إذا كان لديك مخطط مثل هذا: +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: ```graphql type Something @entity { @@ -458,7 +458,7 @@ type MyEntity @entity { } ``` -سيتعين عليك إضافة `! ` لعضو من نوع القائمة ، مثل هذا: +You'll have to add an `!` to the member of the List type, like this: ```graphql type Something @entity { @@ -471,14 +471,14 @@ type MyEntity @entity { } ``` -هذا التغير بسبب اختلافات ال nullability بين إصدارات AssemblyScript وهو مرتبط بملف`src/generated/schema.ts` (المسار الافتراضي ، ربما تكون قد غيرت هذا). +This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). -### آخر +### Other - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- لم تعد المصفوفة ترث من ArrayBufferView ، لكنها أصبحت متميزة الآن ([ v0.10.0 ](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) - Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- نتيجة العملية الثنائية ` ** ` هي الآن العدد الصحيح للمقام المشترك إذا كان كلا المعاملين عددًا صحيحًا. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- إجبار`NaN` إلى `false` عندما ال casting إلى`bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) - When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) - Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From 63122a797958306a303c278502bc8eae9849109e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:50 -0500 Subject: [PATCH 030/241] New translations assemblyscript-migration-guide.mdx (Japanese) --- .../assemblyscript-migration-guide.mdx | 130 +++++++++--------- 1 file changed, 65 insertions(+), 65 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index 951158bf610b..2db90a608110 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -1,18 +1,18 @@ --- -title: AssemblyScript マイグレーションガイド +title: AssemblyScript Migration Guide --- -これまでサブグラフは、[AssemblyScriptの最初のバージョン](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6)を使用していました。 ついに[最新のバージョン](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10)(v0.19.10) のサポートを追加しました! 🎉 +Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -これにより、サブグラフの開発者は、AS言語と標準ライブラリの新しい機能を使用できるようになります。 +That will enable subgraph developers to use newer features of the AS language and standard library. -このガイドは、バージョン`0.22.0`以下の`graph-cli`/`graph-ts` をお使いの方に適用されます。 もしあなたがすでにそれ以上のバージョンにいるなら、あなたはすでに AssemblyScript のバージョン`0.19.10` を使っています。 +This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> 注:`0.24.0`以降、`graph-node`はサブグラフマニフェストで指定された`apiVersion`に応じて、両方のバージョンをサポートしています。 +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
-## 特徴 +## Features -### 新機能 +### New functionality - `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) - New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) @@ -30,21 +30,21 @@ title: AssemblyScript マイグレーションガイド - Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) - Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### 最適化 +### Optimizations - `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) - Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### その他 +### Other - The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## アップグレードの方法 +## How to upgrade? -1. `subgraph.yaml`のマッピングの`apiVersion`を`0.0.6`に変更してください。 +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: ```yaml ... @@ -56,7 +56,7 @@ dataSources: ... ``` -2. 使用している`graph-cli`を`最新版`に更新するには、次のように実行します。 +2. Update the `graph-cli` you're using to the `latest` version by running: ```bash # if you have it globally installed @@ -66,20 +66,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. `graph-ts`についても同様ですが、グローバルにインストールするのではなく、メインの依存関係に保存します。 +3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: ```bash npm install --save @graphprotocol/graph-ts@latest ``` -4. ガイドの残りの部分に従って、言語の変更を修正します。 -5. `codegen`を実行し、再度`deploy`します。 +4. Follow the rest of the guide to fix the language breaking changes. +5. Run `codegen` and `deploy` again. -## 変更点 +## Breaking changes ### Nullability -古いバージョンのAssemblyScriptでは、以下のようなコードを作ることができました: +On the older version of AssemblyScript, you could create code like this: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -しかし、新しいバージョンでは、値がnullableであるため、次のようにチェックする必要があります: +However on the newer version, because the value is nullable, it requires you to check, like this: ```typescript let maybeValue = load() @@ -98,7 +98,7 @@ if (maybeValue) { } ``` -あるいは、次のように強制します: +Or force it like this: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -106,11 +106,11 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -どちらを選択すべきか迷った場合は、常に安全なバージョンを使用することをお勧めします。 値が存在しない場合は、サブグラフハンドラの中でreturnを伴う初期のif文を実行するとよいでしょう。 +If you are unsure which to choose, we recommend always using the safe version. 
If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. -### 変数シャドウイング +### Variable Shadowing -以前は、[変数のシャドウイング](https://en.wikipedia.org/wiki/Variable_shadowing)を行うことができ、次のようなコードが動作していました。 +Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: ```typescript let a = 10 @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -しかし、現在はこれができなくなり、コンパイラは次のようなエラーを返します。 +However now this isn't possible anymore, and the compiler returns this error: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -127,9 +127,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -変数シャドウイングを行っていた場合は、重複する変数の名前を変更する必要があります。 -### Null比較 -サブグラフのアップグレードを行うと、時々以下のようなエラーが発生することがあります。 +You'll need to rename your duplicate variables if you had variable shadowing. +### Null Comparisons +By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -137,7 +137,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -解決するには、 `if` 文を以下のように変更するだけです。 +To solve you can simply change the `if` statement to something like this: ```typescript if (!decimals) { @@ -147,23 +147,23 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i if (decimals === null) { ``` -これは、==ではなく!=の場合も同様です。 +The same applies if you're doing != instead of ==. -### キャスト +### Casting -以前の一般的なキャストの方法は、次のように`as`キーワードを使うだけでした。 +The common way to do casting before was to just use the `as` keyword, like this: ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -しかし、これは2つのシナリオでしか機能しません。 +However this only works in two scenarios: -- プリミティブなキャスト(between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); -- クラス継承のアップキャスティング(サブクラス→スーパークラス) +- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Upcasting on class inheritance (subclass → superclass) -例 +Examples: ```typescript // primitive casting @@ -179,10 +179,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -キャストしたくても、`as`/`var`を使うと**安全ではない**というシナリオが2つあります。 +There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: -- クラス継承のダウンキャスト(スーパークラス → サブクラス) -- スーパークラスを共有する2つの型の間 +- Downcasting on class inheritance (superclass → subclass) +- Between two types that share a superclass ```typescript // downcasting on class inheritance @@ -199,7 +199,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -このような場合には、`changetype`関数を使用します。 +For those cases, you can use the `changetype` function: ```typescript // downcasting on class inheritance @@ -218,7 +218,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -単にnull性を除去したいだけなら、`as` オペレーター(or `variable`)を使い続けることができますが、値がnullではないことを確認しておかないと壊れてしまいます。 +If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. 
```typescript // remove nullability @@ -231,18 +231,18 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -Nullabilityについては、[nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks)を利用することをお勧めします。 +For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 -また、キャストを容易にするために、いくつかの型にスタティックメソッドを追加しました。 +Also we've added a few more static methods in some types to ease casting, they are: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### プロパティアクセスによるNullabilityチェック +### Nullability check with property access -[nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks)を使用するには、次のように`if`文や三項演算子(`?` and `:`) を使用します。 +To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: ```typescript let something: string | null = 'data' @@ -260,7 +260,7 @@ if (something) { } ``` -しかし、これは、以下のように、プロパティのアクセスではなく、変数に対して`if`/ternaryを行っている場合にのみ機能します。 +However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: ```typescript class Container { @@ -273,7 +273,7 @@ container.data = 'data' let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` -すると、このようなエラーが出力されます。 +Which outputs this error: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -281,7 +281,7 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` -この問題を解決するには、そのプロパティアクセスのための変数を作成して、コンパイラがnullability checkのマジックを行うようにします。 +To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript class Container { @@ -296,9 +296,9 @@ let data = container.data let somethingOrElse: string = data ? data : 'else' // compiles just fine :) ``` -### プロパティアクセスによるオペレーターオーバーロード +### Operator overloading with property access -アセンブリスクリプトのコンパイラは、値の片方がnullableであることを警告するコンパイル時のエラーを出さずに、ただ黙ってコンパイルするので、実行時にコードが壊れる可能性があります。 +If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. ```typescript class BigInt extends Uint8Array { @@ -322,7 +322,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -この件に関して、アセンブリ・スクリプト・コンパイラーに問題を提起しましたが、 今のところ、もしサブグラフ・マッピングでこの種の操作を行う場合には、 その前にNULLチェックを行うように変更してください。 +We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. 
```typescript let wrapper = new Wrapper(y) @@ -334,9 +334,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### 値の初期化 +### Value initialization -もし、このようなコードがあった場合: +If you have any code like this: ```typescript var value: Type // null @@ -344,7 +344,7 @@ value.x = 10 value.y = 'content' ``` -これは、値が初期化されていないために起こります。したがって、次のようにサブグラフが値を初期化していることを確認してください。 +It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: ```typescript var value = new Type() // initialized @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -また、以下のようにGraphQLのエンティティにNullableなプロパティがある場合も同様です。 +Also if you have nullable properties in a GraphQL entity, like this: ```graphql type Total @entity { @@ -361,7 +361,7 @@ type Total @entity { } ``` -そして、以下のようなコードになります: +And you have code similar to this: ```typescript let total = Total.load('latest') @@ -373,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -`total.amount`の値を確実に初期化する必要があります。なぜなら、最後の行のsumのようにアクセスしようとすると、クラッシュしてしまうからです。 そのため、最初に初期化する必要があります。 +You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: ```typescript let total = Total.load('latest') @@ -386,7 +386,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -あるいは、このプロパティに nullable 型を使用しないように GraphQL スキーマを変更することもできます。そうすれば、`コード生成`の段階でゼロとして初期化されます。 +Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 ```graphql type Total @entity { @@ -405,9 +405,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### クラスのプロパティの初期化 +### Class property initialization -以下のように、他のクラス(自分で宣言したものや標準ライブラリで宣言したもの)のプロパティを持つクラスをエクスポートした場合、そのクラスのプロパティを初期化します: +If you export any classes with properties that are other classes (declared by you or by the standard library) like this: ```typescript class Thing {} @@ -417,7 +417,7 @@ export class Something { } ``` -コンパイラがエラーになるのは、クラスであるプロパティにイニシャライザを追加するか、`!` オペレーターを追加する必要があるからです。 +The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: ```typescript export class Something { @@ -441,11 +441,11 @@ export class Something { } ``` -### GraphQLスキーマ +### GraphQL schema -これはAssemblyScriptの直接的な変更ではありませんが、`schema.graphql`ファイルを更新する必要があるかもしれません。 +This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. -タイプの中にNon-Nullable Listのフィールドを定義することができなくなりました。 次のようなスキーマを持っているとします。 +Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: ```graphql type Something @entity { @@ -458,7 +458,7 @@ type MyEntity @entity { } ``` -Listタイプのメンバーには、以下のように`!` を付ける必要があります。 +You'll have to add an `!` to the member of the List type, like this: ```graphql type Something @entity { @@ -471,9 +471,9 @@ type MyEntity @entity { } ``` -これはAssemblyScriptのバージョンによるnullabilityの違いから変更されたもので、`src/generated/schema.ts`ファイル(デフォルトのパス、あなたはこれを変更したかもしれません)に関連しています。 +This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). 
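The underlying reason is the stricter nullability handling in newer AssemblyScript: when a list's elements may be null, every element access has to be checked before the value can be used. A small, plain-AssemblyScript sketch of the difference (illustrative only, not the generated `schema.ts` code; `maybeIds` and `ids` are hypothetical values):

```typescript
// Elements may be null — the shape implied by the old `[Something]!` field.
let maybeIds = new Array<string | null>()
maybeIds.push('a')
maybeIds.push(null)

let first = maybeIds[0]
if (first) {
  // `first` can only be used as a plain string inside this check
}

// Elements are non-nullable — the `[Something!]!` shape the schema now requires.
let ids = new Array<string>()
ids.push('a')
let firstId = ids[0] // usable directly, no null check needed
```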
-### その他 +### Other - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) From 5e5ce9493f9995e4557ed6f4951786b13280e1a5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:53 -0500 Subject: [PATCH 031/241] New translations distributed-systems.mdx (Arabic) --- pages/ar/developer/distributed-systems.mdx | 50 +++++++++++----------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/pages/ar/developer/distributed-systems.mdx b/pages/ar/developer/distributed-systems.mdx index e647ca602f02..894fcbe2e18b 100644 --- a/pages/ar/developer/distributed-systems.mdx +++ b/pages/ar/developer/distributed-systems.mdx @@ -1,37 +1,37 @@ --- -title: الانظمة الموزعة +title: Distributed Systems --- -The Graph هو بروتوكول يتم تنفيذه كنظام موزع. +The Graph is a protocol implemented as a distributed system. -فشل الاتصالات. وصول الطلبات خارج الترتيب. أجهزة الكمبيوتر المختلفة ذات الساعات والحالات غير المتزامنة تعالج الطلبات ذات الصلة. الخوادم تعيد التشغيل. حدوث عمليات Re-orgs بين الطلبات. هذه المشاكل متأصلة في جميع الأنظمة الموزعة ولكنها تتفاقم في الأنظمة التي تعمل على نطاق عالمي. +Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. -ضع في اعتبارك هذا المثال لما قد يحدث إذا قام العميل بـ polls للمفهرس للحصول على أحدث البيانات أثناء re-org. +Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. -1. المفهرس يستوعب الكتلة 8 -2. تقديم الطلب للعميل للمجموعة 8 -3. يستوعب المفهرس الكتلة 9 -4. المفهرس يستوعب الكتلة 10A -5. تقديم الطلب للعميل للكتلة 10A -6. يكتشف المفهرس reorg لـ 10B ويسترجع 10A -7. تقديم الطلب للعميل للكتلة 9 -8. المفهرس يستوعب الكتلة 10B -9. المفهرس يستوعب الكتلة 11 -10. تقديم الطلب للعميل للكتلة 11 +1. Indexer ingests block 8 +2. Request served to the client for block 8 +3. Indexer ingests block 9 +4. Indexer ingests block 10A +5. Request served to the client for block 10A +6. Indexer detects reorg to 10B and rolls back 10A +7. Request served to the client for block 9 +8. Indexer ingests block 10B +9. Indexer ingests block 11 +10. Request served to the client for block 11 -من وجهة نظر المفهرس ، تسير الأمور إلى الأمام بشكل منطقي. الوقت يمضي قدما ، على الرغم من أننا اضطررنا إلى التراجع عن كتلة الـ uncle وتشغيل الكتلة وفقا للاتفاق. على طول الطريق ، يقدم المفهرس الطلبات باستخدام أحدث حالة يعرفها في ذلك الوقت. +From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. -لكن من وجهة نظر العميل ، تبدو الأمور مشوشة. يلاحظ العميل أن الردود كانت للكتل 8 و 10 و 9 و 11 بهذا الترتيب. نسمي هذا مشكلة "تذبذب الكتلة". عندما يواجه العميل تذبذبا في الكتلة ، فقد تظهر البيانات متناقضة مع نفسها بمرور الوقت. يزداد الموقف سوءا عندما نعتبر أن المفهرسين لا يستوعبون جميع الكتل الأخيرة في وقت واحد ، وقد يتم توجيه طلباتك إلى عدة مفهرسين. +From the point of view of the client, however, things appear chaotic. 
The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. -تقع على عاتق العميل والخادم مسؤولية العمل معا لتوفير بيانات متسقة للمستخدم. يجب استخدام طرق مختلفة اعتمادا على الاتساق المطلوب حيث لا يوجد برنامج واحد مناسب لكل مشكلة. +It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. -الاستنتاج من خلال الآثار المترتبة على الأنظمة الموزعة أمر صعب ، لكن الإصلاح قد لا يكون كذلك! لقد أنشأنا APIs وأنماط لمساعدتك على تصفح بعض حالات الاستخدام الشائعة. توضح الأمثلة التالية هذه الأنماط ولكنها لا تزال تتجاهل التفاصيل التي يتطلبها كود الإنتاج (مثل معالجة الأخطاء والإلغاء) حتى لا يتم تشويش الأفكار الرئيسية. +Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. -## Polling للبيانات المحدثة +## Polling for updated data -The Graph يوفر `block: { number_gte: $minBlock }` API ، والتي تضمن أن تكون الاستجابة لكتلة واحدة تزيد أو تساوي `$minBlock`. إذا تم إجراء الطلب لـ `graph-node` instance ولم تتم مزامنة الكتلة الدنيا بعد ، فسيرجع `graph-node` بخطأ. إذا قام `graph-node` بمزامنة الكتلة الدنيا ، فسيتم تشغيل الاستجابة لأحدث كتلة. إذا تم تقديم الطلب إلى Edge & Node Gateway ، ستقوم الـ Gateway بفلترة المفهرسين الذين لم يقوموا بعد بمزامنة الكتلة الدنيا وتجعل الطلب لأحدث كتلة قام المفهرس بمزامنتها. +The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. -يمكننا استخدام ` number_gte ` لضمان عدم عودة الوقت إلى الوراء عند عمل polling للبيانات في الحلقة. هنا مثال لذلك: +We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: ```javascript /// Updates the protocol.paused variable to the latest @@ -73,11 +73,11 @@ async function updateProtocolPaused() { } ``` -## جلب مجموعة من العناصر المرتبطة +## Fetching a set of related items -حالة أخرى هي جلب مجموعة كبيرة أو بشكل عام جلب العناصر المرتبطة عبر طلبات متعددة. على عكس حالة الـ polling (حيث كان التناسق المطلوب هو المضي قدما في الزمن) ، فإن الاتساق المطلوب هو لنقطة واحدة في الزمن. +Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. -هنا سوف نستخدم الوسيطة `block: { hash: $blockHash }` لتثبيت جميع نتائجنا في نفس الكتلة. 
+Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. ```javascript /// Gets a list of domain names from a single block using pagination @@ -129,4 +129,4 @@ async function getDomainNames() { } ``` -لاحظ أنه في حالة re-org ، سيحتاج العميل إلى إعادة المحاولة من الطلب الأول لتحديث hash الكتلة إلى كتلة non-uncle. +Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. From 52e7f75b5d4c12d0820b51d006196208439f2f6d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:56 -0500 Subject: [PATCH 032/241] New translations querying-from-your-app.mdx (Spanish) --- pages/es/developer/querying-from-your-app.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/es/developer/querying-from-your-app.mdx b/pages/es/developer/querying-from-your-app.mdx index fb8c7895afaa..c09c44efee72 100644 --- a/pages/es/developer/querying-from-your-app.mdx +++ b/pages/es/developer/querying-from-your-app.mdx @@ -1,10 +1,10 @@ --- -title: Consultar desde una Aplicacion +title: Querying from an Application --- -Una vez que un subgrafo es desplegado en Subgraph Studio o en The Graph Explorer, se te dará el endpoint para tu API GraphQL que debería ser algo así: +Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: -**Subgraph Studio (endpoint de prueba)** +**Subgraph Studio (testing endpoint)** ```sh Queries (HTTP) @@ -18,23 +18,23 @@ Queries (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -Usando el endpoint de GraphQL, puedes usar varias librerías de Clientes de GraphQL para consultar el subgrafo y rellenar tu aplicación con los datos indexados por el subgrafo. +Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. -A continuación se presentan un par de clientes GraphQL más populares en el ecosistema y cómo utilizarlos: +Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: -### Cliente Apollo +### Apollo client -[Apollo client](https://www.apollographql.com/docs/) admite proyectos web que incluyen frameworks como React y Vue, así como clientes móviles como iOS, Android y React Native. +[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. -Veamos cómo obtener datos de un subgrafo con el cliente Apollo en un proyecto web. +Let's look at how fetch data from a subgraph with Apollo client in a web project. -Primero, instala `@apollo/client` y `graphql`: +First, install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -A continuación, puedes consultar la API con el siguiente código: +Then you can query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -Para utilizar variables, puedes pasar un argumento `variables` a la consulta: +To use variables, you can pass in a `variables` argument to the query: ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -Otra opción es [URQL](https://formidable.com/open-source/urql/), una libreria cliente de GraphQL algo más ligera. 
+Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. -Veamos cómo obtener datos de un subgrafo con URQL en un proyecto web. +Let's look at how fetch data from a subgraph with URQL in a web project. -Primero, instala `urql` and `graphql`: +First, install `urql` and `graphql`: ```sh npm install urql graphql ``` -A continuación, puedes consultar la API con el siguiente código: +Then you can query the API with the following code: ```javascript import { createClient } from 'urql' From fe94fb91e7892dd4a6f15a6c702ad87a39e34393 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:57 -0500 Subject: [PATCH 033/241] New translations querying-from-your-app.mdx (Arabic) --- pages/ar/developer/querying-from-your-app.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/ar/developer/querying-from-your-app.mdx b/pages/ar/developer/querying-from-your-app.mdx index f3decc0d1768..c09c44efee72 100644 --- a/pages/ar/developer/querying-from-your-app.mdx +++ b/pages/ar/developer/querying-from-your-app.mdx @@ -1,40 +1,40 @@ --- -title: الاستعلام من التطبيق +title: Querying from an Application --- -بمجرد نشر ال Subgraph في Subgraph Studio أو في Graph Explorer ، سيتم إعطاؤك endpoint ل GraphQL API الخاصة بك والتي يجب أن تبدو كما يلي: +Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: -**Subgraph Studio (اختبار endpoint)** +**Subgraph Studio (testing endpoint)** ```sh -استعلامات (HTTP) +Queries (HTTP) https://api.studio.thegraph.com/query/// ``` **Graph Explorer** ```sh -استعلامات (HTTP) +Queries (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -باستخدام GraphQL endpoint ، يمكنك استخدام العديد من مكتبات GraphQL Client للاستعلام عن ال Subgraph وملء تطبيقك بالبيانات المفهرسة بواسطة ال Subgraph. +Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: ### Apollo client -[Apoolo client ](https://www.apollographql.com/docs/)يدعم مشاريع الويب بما في ال framework مثل React و Vue ، بالإضافة إلى mobile clients مثل iOS و Android و React Native. +[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. -لنلقِ نظرة على كيفية جلب البيانات من Subgraph وذلك باستخدام Apollo client في مشروع ويب. +Let's look at how fetch data from a subgraph with Apollo client in a web project. -اولا قم بتثبيت `@apollo/client` and `graphql`: +First, install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -بعد ذلك يمكنك الاستعلام عن API بالكود التالي: +Then you can query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -لاستخدام المتغيرات، يمكنك التمرير في`variables`ل argument الاستعلام: +To use variables, you can pass in a `variables` argument to the query: ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -هناك خيار آخر وهو [ URQL ](https://formidable.com/open-source/urql/) ، وهي مكتبة GraphQL client أخف وزنا إلى حد ما. 
+Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. -لنلقِ نظرة على كيفية جلب البيانات من Subgraph باستخدام URQL في مشروع ويب. +Let's look at how fetch data from a subgraph with URQL in a web project. -اولا قم بتثبيت `urql` و `graphql`: +First, install `urql` and `graphql`: ```sh npm install urql graphql ``` -بعد ذلك يمكنك الاستعلام عن API بالكود التالي: +Then you can query the API with the following code: ```javascript import { createClient } from 'urql' From 8134f9c72000a909114882efdf5c84b8a1b4d488 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:58 -0500 Subject: [PATCH 034/241] New translations querying-from-your-app.mdx (Japanese) --- pages/ja/developer/querying-from-your-app.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/ja/developer/querying-from-your-app.mdx b/pages/ja/developer/querying-from-your-app.mdx index e94a6f50046e..9038d2ee3790 100644 --- a/pages/ja/developer/querying-from-your-app.mdx +++ b/pages/ja/developer/querying-from-your-app.mdx @@ -1,10 +1,10 @@ --- -title: アプリケーションからのクエリ +title: Querying from an Application --- -サブグラフがSubgraph StudioまたはGraph Explorerにデプロイされると、GraphQL APIのエンドポイントが与えられ、以下のような形になります。 +Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: -**Subgraph Studio (テスト用エンドポイント)** +**Subgraph Studio (testing endpoint)** ```sh Queries (HTTP) @@ -18,23 +18,23 @@ Queries (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -GraphQLエンドポイントを使用すると、さまざまなGraphQLクライアントライブラリを使用してサブグラフをクエリし、サブグラフによってインデックス化されたデータをアプリに入力することができます。 +Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. -ここでは、エコシステムで人気のあるGraphQLクライアントをいくつか紹介し、その使い方を説明します: +Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: -### Apolloクライアント +### Apollo client -[Apolloクライアント](https://www.apollographql.com/docs/)は、ReactやVueなどのフレームワークを含むWebプロジェクトや、iOS、Android、React Nativeなどのモバイルクライアントをサポートしています。 +[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. -WebプロジェクトでApolloクライアントを使ってサブグラフからデータを取得する方法を見てみましょう。 +Let's look at how fetch data from a subgraph with Apollo client in a web project. -まず、`@apollo/client`と`graphql`をインストールします: +First, install `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -その後、以下のコードでAPIをクエリできます: +Then you can query the API with the following code: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -変数を使うには、クエリの引数に`variables` を渡します。 +To use variables, you can pass in a `variables` argument to the query: ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -もう一つの選択肢は[URQL](https://formidable.com/open-source/urql/)で、URQLは、やや軽量なGraphQLクライアントライブラリです。 +Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. -URQLは、やや軽量なGraphQLクライアントライブラリです。 +Let's look at how fetch data from a subgraph with URQL in a web project. 
-WebプロジェクトでURQLを使ってサブグラフからデータを取得する方法を見てみましょう。 まず、`urql`と`graphql`をインストールします。 +First, install `urql` and `graphql`: ```sh npm install urql graphql ``` -その後、以下のコードでAPIをクエリできます: +Then you can query the API with the following code: ```javascript import { createClient } from 'urql' From d3dfab999e393e5cf105df83b1687ce642a442d0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:54:59 -0500 Subject: [PATCH 035/241] New translations querying-from-your-app.mdx (Chinese Simplified) --- pages/zh/developer/querying-from-your-app.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/developer/querying-from-your-app.mdx b/pages/zh/developer/querying-from-your-app.mdx index c09c44efee72..949e915c8bfd 100644 --- a/pages/zh/developer/querying-from-your-app.mdx +++ b/pages/zh/developer/querying-from-your-app.mdx @@ -11,7 +11,7 @@ Queries (HTTP) https://api.studio.thegraph.com/query/// ``` -**Graph Explorer** +**Graph 浏览器** ```sh Queries (HTTP) From 24da222eca7d8952a25878a257b147b4da46ce1a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:02 -0500 Subject: [PATCH 036/241] New translations quick-start.mdx (Spanish) --- pages/es/developer/quick-start.mdx | 98 +++++++++++++++--------------- 1 file changed, 49 insertions(+), 49 deletions(-) diff --git a/pages/es/developer/quick-start.mdx b/pages/es/developer/quick-start.mdx index a75c04fadbd1..6893d424ddc2 100644 --- a/pages/es/developer/quick-start.mdx +++ b/pages/es/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: Comienzo Rapido +title: Quick Start --- -Esta guía te llevará rápidamente a través de cómo inicializar, crear y desplegar tu subgrafo en: +This guide will quickly take you through how to initialize, create, and deploy your subgraph on: -- **Subgraph Studio** - usado solo para subgrafos que indexan en **Ethereum mainnet** -- **Hosted Service** - usado para subgrafos que indexan **otras redes** fuera de Ethereum mainnet (e.g. Binance, Matic, etc) +- **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** +- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) ## Subgraph Studio -### 1. Instala The Graph CLI +### 1. Install the Graph CLI -The Graph CLI esta escrito en JavaScript y necesitaras tener `npm` o `yarn` instalado para usarlo. +The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Inicializa tu Subgrafo +### 2. Initialize your Subgraph -- Inicializa tu subgrafo a partir de un contrato existente. +- Initialize your subgraph from an existing contract. ```sh graph init --studio ``` -- El slug de tu subgrafo es un identificador para tu subgrafo. La herramienta CLI te guiará a través de los pasos para crear un subgrafo, como la address del contrato, la red, etc., como puedes ver en la captura de pantalla siguiente. +- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. -![Comando de Subgrafo](/img/Subgraph-Slug.png) +![Subgraph command](/img/Subgraph-Slug.png) -### 3. Escribe tu Subgrafo +### 3. 
Write your Subgraph -Los comandos anteriores crean un subgrafo de andamio que puedes utilizar como punto de partida para construir tu subgrafo. Al realizar cambios en el subgrafo, trabajarás principalmente con tres archivos: +The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: -- Manifest (subgraph.yaml) - El manifiesto define qué fuentes de datos indexarán tus subgrafos. -- Schema (schema.graphql) - El esquema GraphQL define los datos que deseas recuperar del subgrafo. -- AssemblyScript Mappings (mapping.ts) - Este es el código que traduce los datos de tus fuentes de datos a las entidades definidas en el esquema. +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. -Para más información sobre cómo escribir tu subgrafo, mira [Create a Subgraph](/developer/create-subgraph-hosted). +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Despliega en Subgraph Studio +### 4. Deploy to the Subgraph Studio -- Ve a Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) y conecta tu wallet. -- Haz clic en "Crear" e introduce el subgrafo que utilizaste en el paso 2. -- Ejecuta estos comandos en la carpeta subgrafo +- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. +- Click "Create" and enter the subgraph slug you used in step 2. +- Run these commands in the subgraph folder ```sh $ graph codegen $ graph build ``` -- Autentica y despliega tu subgrafo. La clave para desplegar se puede encontrar en la página de Subgraph en Subgraph Studio. +- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. ```sh $ graph auth --studio $ graph deploy --studio ``` -- Se te pedirá una etiqueta de versión. Se recomienda encarecidamente utilizar las siguientes convenciones para nombrar tus versiones. Ejemplo: `0.0.1`, `v1`, `version1` +- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` -### 5. Comprueba tus registros +### 5. Check your logs -Los registros deberían indicarte si hay algún error. Si tu subgrafo está fallando, puedes consultar la fortaleza del subgrafo utilizando la función [GraphiQL Playground](https://graphiql-online.com/). Usa [este endpoint](https://api.thegraph.com/index-node/graphql). Ten en cuenta que puedes aprovechar la consulta de abajo e introducir tu ID de despliegue para tu subgrafo. En este caso, `Qm...` es el ID de despliegue (que puede ser obtenido en la pagina de the Subgraph debado de **Details**). La siguiente consulta te dirá cuándo falla un subgrafo para que puedas depurar en consecuencia: +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. 
In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { @@ -109,15 +109,15 @@ Los registros deberían indicarte si hay algún error. Si tu subgrafo está fall } ``` -### 6. Consulta tu Subgrafo +### 6. Query your Subgraph -Ahora puedes consultar tu subgrafo siguiendo [estas instrucciones](/developer/query-the-graph). Puedes consultar desde tu dapp si no tienes tu clave de API a través de la URL de consulta temporal, libre y de tarifa limitada, que puede utilizarse para el desarrollo y la puesta en marcha. Puedes leer las instrucciones adicionales sobre cómo consultar un subgrafo desde una aplicación frontend [aquí](/developer/querying-from-your-app). +You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). -## Servicio Alojado +## Hosted Service -### 1. Instala The Graph CLI +### 1. Install the Graph CLI -"The Graph CLI es un paquete npm y necesitarás `npm` o `yarn` instalado para usarlo. +"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. ```sh # NPM @@ -127,39 +127,39 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Inicializa tu Subgrafo +### 2. Initialize your Subgraph -- Inicializa tu subgrafo a partir de un contrato existente. +- Initialize your subgraph from an existing contract. ```sh $ graph init --product hosted-service --from-contract
``` -- Se te pedirá un nombre de subgrafo. El formato es `/`. Ex: `graphprotocol/examplesubgraph` +- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` -- Si quieres inicializar desde un ejemplo, ejecuta el siguiente comando: +- If you'd like to initialize from an example, run the command below: ```sh $ graph init --product hosted-service --from-example ``` -- En el caso del ejemplo, el subgrafo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. +- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -### 3. Escribe tu Subgrafo +### 3. Write your Subgraph -El comando anterior habrá creado un andamio a partir del cual puedes construir tu subgrafo. Al realizar cambios en el subgrafo, trabajarás principalmente con tres archivos: +The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: -- Manifest (subgraph.yaml) - El manifiesto define qué fuentes de datos indexará tu subgrafo -- Schema (schema.graphql) - El esquema GraphQL define los datos que deseas recuperar del subgrafo -- AssemblyScript Mappings (mapping.ts) - Este es el código que traduce los datos de tus fuentes de datos a las entidades definidas en el esquema +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index +- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema -Para más información sobre cómo escribir tu subgrafo, mira [Create a Subgraph](/developer/create-subgraph-hosted). +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Despliega tu Subgrafo +### 4. Deploy your Subgraph -- Firma en el [Hosted Service](https://thegraph.com/hosted-service/) usando tu cuenta github -- Haz clic en Add Subgraph y rellena la información requerida. Utiliza el mismo nombre de subgrafo que en el paso 2. -- Ejecuta codegen en la carpeta del subgrafo +- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account +- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. +- Run codegen in the subgraph folder ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- Agrega tu token de acceso y despliega tu subgrafo. El token de acceso se encuentra en tu panel de control en el Servicio Alojado (Hosted Service). +- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. Comprueba tus registros +### 5. Check your logs -Los registros deberían indicarte si hay algún error. Si tu subgrafo está fallando, puedes consultar la fortaleza del subgrafo utilizando la función [GraphiQL Playground](https://graphiql-online.com/). Usa [este endpoint](https://api.thegraph.com/index-node/graphql). Ten en cuenta que puedes aprovechar la consulta de abajo e introducir tu ID de despliegue para tu subgrafo. 
En este caso, `Qm...` es el ID de despliegue (que puede ser obtenido en la pagina de the Subgraph debado de **Details**). La siguiente consulta te dirá cuándo falla un subgrafo para que puedas depurar en consecuencia: +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { @@ -222,6 +222,6 @@ Los registros deberían indicarte si hay algún error. Si tu subgrafo está fall } ``` -### 6. Consulta tu Subgrafo +### 6. Query your Subgraph -Sigue [estas instrucciones](/hosted-service/query-hosted-service) para consultar tu subgrafo en el Servicio Alojado (Hosted Service). +Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. From 31b8bab23063aad98d19fe66b7cac9e835daa5e6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:03 -0500 Subject: [PATCH 037/241] New translations quick-start.mdx (Arabic) --- pages/ar/developer/quick-start.mdx | 88 +++++++++++++++--------------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/pages/ar/developer/quick-start.mdx b/pages/ar/developer/quick-start.mdx index 5a245d65141a..d66ecb5b38b6 100644 --- a/pages/ar/developer/quick-start.mdx +++ b/pages/ar/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: بداية سريعة +title: Quick Start --- -سيأخذك هذا الدليل سريعا ويعلمك كيفية تهيئة وإنشاء ونشر Subgraph الخاص بك على: +This guide will quickly take you through how to initialize, create, and deploy your subgraph on: - **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** -- **Hosted Service** - يتم استخدامها ل Subgraphs التي تفهرس ** الشبكات الأخرى ** خارج Ethereum mainnet (مثل Binance و Matic والخ..) +- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) ## Subgraph Studio -### 1. قم بتثبيت Graph CLI +### 1. Install the Graph CLI -تمت كتابة Graph CLI بلغة JavaScript وستحتاج إلى تثبيت إما `npm` أو `yarn` لاستخدامه. +The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. قم بتهيئة Subgraph الخاص بك +### 2. Initialize your Subgraph -- ابدأ ال Subgraph الخاص بك من عقد موجود. +- Initialize your subgraph from an existing contract. ```sh graph init --studio ``` -- مؤشر ال Subgraph الخاص بك هو معرف ل Subgraph الخاص بك. ستوجهك أداة CLI لخطوات إنشاء Subgraph ، مثل عنوان العقد والشبكة الخ.. كما ترى في لقطة الشاشة أدناه. +- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. -![أمر Subgraph](/img/Subgraph-Slug.png) +![Subgraph command](/img/Subgraph-Slug.png) -### 3. اكتب subgraph الخاص بك +### 3. Write your Subgraph -تقوم الأوامر السابقة بإنشاء ركيزة ال Subgraph والتي يمكنك استخدامها كنقطة بداية لبناء subgraph الخاص بك. 
عند إجراء تغييرات على ال subgraph ، ستعمل بشكل أساسي على ثلاثة ملفات: +The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: -- : (Manifest(subgraph.yaml يحدد ال manifest مصادر البيانات التي سيقوم Subgraphs الخاص بك بفهرستها. -- مخطط (schema.graphql) - يحدد مخطط GraphQL البيانات التي ترغب في استردادها من Subgraph. -- (AssemblyScript Mappings (mapping.ts هذا هو الكود الذي يترجم البيانات من مصادر البيانات الخاصة بك إلى الكيانات المحددة في المخطط. +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. -لمزيد من المعلومات حول كيفية كتابة Subgraph ، راجع [ إنشاء Subgraph ](/developer/create-subgraph-hosted). +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). ### 4. Deploy to the Subgraph Studio -- انتقل إلى Subgraph Studio [ https://thegraph.com/studio/ ](https://thegraph.com/studio/) وقم بتوصيل محفظتك. +- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. - Click "Create" and enter the subgraph slug you used in step 2. -- قم بتشغيل هذه الأوامر في مجلد Subgraph +- Run these commands in the subgraph folder ```sh $ graph codegen $ graph build ``` -- وثق وأنشر ال Subgraph الخاص بك. يمكن العثور على مفتاح النشر في صفحة Subgraph في Subgraph Studio. +- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. ```sh $ graph auth --studio $ graph deploy --studio ``` -- سيتم طلب منك تسمية الإصدار. يوصى بشدة باستخدام المصطلحات التالية لتسمية الإصدارات الخاصة بك. مثال: `0.0.1` ، `v1` ، `version1` +- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` -### 5. تحقق من السجلات الخاصة بك +### 5. Check your logs -السجلات ستخبرك في حالة وجود أخطاء. في حالة فشل Subgraph ، يمكنك الاستعلام عن صحة Subgraph وذلك باستخدام [ GraphiQL Playground ](https://graphiql-online.com/). استخدم [ لهذا ال endpoint ](https://api.thegraph.com/index-node/graphql). لاحظ أنه يمكنك الاستفادة من الاستعلام أدناه وإدخال ID النشر ل Subgraph الخاص بك. في هذه الحالة ، `Qm...` هو ID النشر (والذي يمكن أن يوجد في صفحة ال Subgraph ضمن ** التفاصيل **). سيخبرك الاستعلام أدناه عند فشل Subgraph حتى تتمكن من تصحيح الأخطاء وفقًا لذلك: +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { @@ -109,15 +109,15 @@ $ graph deploy --studio } ``` -### 6. الاستعلام عن ال Subgraph الخاص بك +### 6. Query your Subgraph -يمكنك الآن الاستعلام عن Subgraph باتباع [ هذه الإرشادات ](/developer/query-the-graph). 
يمكنك الاستعلام من ال dapp الخاص بك إذا لم يكن لديك API Key الخاص بك وذلك عبر عنوان URL الخاص بالاستعلام المؤقت المجاني والمحدود والذي يمكن استخدامه للتطوير والتشغيل. يمكنك قراءة الإرشادات الإضافية حول كيفية الاستعلام عن رسم بياني فرعي من [ هنا ](/developer/querying-from-your-app). +You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). ## الخدمة المستضافة -### 1. قم بتثبيت Graph CLI +### 1. Install the Graph CLI -"Graph CLI عبارة عن حزمة npm وستحتاج إلى تثبيت `npm` أو `yarn` لاستخدامها. +"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. ```sh # NPM @@ -127,15 +127,15 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. قم بتهيئة Subgraph الخاص بك +### 2. Initialize your Subgraph -- ابدأ ال Subgraph الخاص بك من عقد موجود. +- Initialize your subgraph from an existing contract. ```sh $ graph init --product hosted-service --from-contract
``` -- سيُطلب منك اسم Subgraph. التنسيق هو `/`. مثال: `graphprotocol/examplesubgraph` +- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` - If you'd like to initialize from an example, run the command below: @@ -143,23 +143,23 @@ $ graph init --product hosted-service --from-contract
$ graph init --product hosted-service --from-example ``` -- في حالة المثال ، يعتمد Subgraph على عقد Gravity بواسطة Dani Grant الذي يدير ال avatars للمستخدم ويصدر أحداث `NewGravatar` أو `UpdateGravatar` كلما تم إنشاء ال avatars أو تحديثها. +- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -### 3. اكتب subgraph الخاص بك +### 3. Write your Subgraph -سيكون الأمر السابق قد أنشأ ركيزة حيث يمكنك Subgraph الخاص بك. عند إجراء تغييرات على ال subgraph ، ستعمل بشكل أساسي على ثلاثة ملفات: +The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: -- : (Manifest(subgraph.yaml يحدد ال manifest مصادر البيانات التي سيفهرسها ال Subgraph -- مخطط (schema.graphql) - يحدد مخطط GraphQL البيانات التي ترغب في جلبها من Subgraph -- (AssemblyScript Mappings (mapping.ts هذا هو الكود الذي يترجم البيانات من مصادر البيانات الخاصة بك إلى الكيانات المحددة في المخطط +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index +- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema -لمزيد من المعلومات حول كيفية كتابة Subgraph ، راجع [ إنشاء Subgraph ](/developer/create-subgraph-hosted). +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. انشر ال subgraph الخاص بك +### 4. Deploy your Subgraph -- سجّل الدخول إلى [ الخدمة المستضافة ](https://thegraph.com/hosted-service/) باستخدام حسابك على github -- انقر فوق إضافة Subgraph واملأ المعلومات المطلوبة. استخدم نفس اسم ال Subgraph كما في الخطوة 2. -- قم بتشغيل codegen في مجلد ال Subgraph +- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account +- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. +- Run codegen in the subgraph folder ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- أضف توكن الوصول الخاص بك وانشر ال Subgraph الخاص بك. يتم العثور على توكن الوصول في لوحة التحكم في ال Hosted service. +- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. تحقق من السجلات الخاصة بك +### 5. Check your logs -السجلات ستخبرك في حالة وجود أخطاء. في حالة فشل Subgraph ، يمكنك الاستعلام عن صحة Subgraph وذلك باستخدام [ GraphiQL Playground ](https://graphiql-online.com/). استخدم [ لهذا ال endpoint ](https://api.thegraph.com/index-node/graphql). لاحظ أنه يمكنك الاستفادة من الاستعلام أدناه وإدخال ID النشر ل Subgraph الخاص بك. في هذه الحالة ، `Qm...` هو ID النشر (والذي يمكن أن يوجد في صفحة ال Subgraph ضمن ** التفاصيل **). سيخبرك الاستعلام أدناه عند فشل Subgraph حتى تتمكن من تصحيح الأخطاء وفقًا لذلك: +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. 
In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { @@ -222,6 +222,6 @@ $ graph deploy --product hosted-service / } ``` -### 6. الاستعلام عن ال Subgraph الخاص بك +### 6. Query your Subgraph -اتبع [ هذه الإرشادات ](/hosted-service/query-hosted-service) للاستعلام عن ال Subgraph الخاص بك على ال Hosted service. +Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. From a219b747f83a6ef6ce5d5ef0dc2b4435a81634cb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:04 -0500 Subject: [PATCH 038/241] New translations quick-start.mdx (Japanese) --- pages/ja/developer/quick-start.mdx | 96 +++++++++++++++--------------- 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/pages/ja/developer/quick-start.mdx b/pages/ja/developer/quick-start.mdx index 023f229a1f39..6893d424ddc2 100644 --- a/pages/ja/developer/quick-start.mdx +++ b/pages/ja/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: クイックスタート +title: Quick Start --- -このガイドでは、サブグラフの初期化、作成、デプロイの方法を素早く説明します: +This guide will quickly take you through how to initialize, create, and deploy your subgraph on: -- **Subgraph Studio** - **Ethereum mainnet**をインデックスするサブグラフにのみ使用されます。 -- **Hosted Service** - Ethereumメインネット以外の **他のネットワーク**(Binance、Maticなど)にインデックスを付けるサブグラフに使用されます。 +- **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** +- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) ## Subgraph Studio -### 1. Graph CLIのインストール +### 1. Install the Graph CLI -Graph CLIはJavaScriptで書かれており、使用するには `npm` または `yarn` のいずれかをインストールする必要があります。 +The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. サブグラフの初期化 +### 2. Initialize your Subgraph -- 既存のコントラクトからサブグラフを初期化します。 +- Initialize your subgraph from an existing contract. ```sh graph init --studio ``` -- サブグラフのスラッグは、サブグラフの識別子です。 CLIツールでは、以下のスクリーンショットに見られるように、コントラクトアドレス、ネットワークなど、サブグラフを作成するための手順を説明します。 +- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. ![Subgraph command](/img/Subgraph-Slug.png) -### 3. サブグラフの作成 +### 3. Write your Subgraph -前述のコマンドでは、サブグラフを作成するための出発点として使用できるscaffoldサブグラフを作成します。 サブグラフに変更を加える際には、主に3つのファイルを使用します: +The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: -- マニフェスト (subgraph.yaml) - マニフェストは、サブグラフがインデックスするデータソースを定義します。 -- スキーマ (schema.graphql) - GraphQLスキーマは、サブグラフから取得したいデータを定義します。 -- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. 
-サブグラフの書き方の詳細については、 [Create a Subgraph](/developer/create-subgraph-hosted) を参照してください。 +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Subgraph Studioへのデプロイ +### 4. Deploy to the Subgraph Studio -- [https://thegraph.com/studio/](https://thegraph.com/studio/) にアクセスし、ウォレットを接続します。 -- 「Create」をクリックし、ステップ2で使用したサブグラフのスラッグを入力します。 -- サブグラフのフォルダで以下のコマンドを実行します。 +- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. +- Click "Create" and enter the subgraph slug you used in step 2. +- Run these commands in the subgraph folder ```sh $ graph codegen $ graph build ``` -- サブグラフの認証とデプロイを行います。 デプロイキーは、Subgraph StudioのSubgraphページにあります。 +- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. ```sh $ graph auth --studio $ graph deploy --studio ``` -- バージョンラベルの入力を求められます。 バージョンラベルの命名には、以下のような規約を使用することを強くお勧めします。 例: `0.0.1`, `v1`, `version1` +- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` -### 5. ログの確認 +### 5. Check your logs -エラーが発生した場合は、ログを確認してください。 サブグラフが失敗している場合は、 [GraphiQL Playground](https://graphiql-online.com/) を使ってサブグラフの健全性をクエリすることができます。 [このエンドポイント](https://api.thegraph.com/index-node/graphql) を使用します。 なお、以下のクエリを活用して、サブグラフのデプロイメントIDを入力することができます。 この場合、 `Qm...` がデプロイメントIDです(これはSubgraphページの**Details**に記載されています)。 以下のクエリは、サブグラフが失敗したときに教えてくれるので、適宜デバッグすることができます: +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { @@ -109,15 +109,15 @@ $ graph deploy --studio } ``` -### 6. サブグラフのクエリ +### 6. Query your Subgraph -[以下の手順](/developer/query-the-graph)でサブグラフのクエリを実行できます。 APIキーを持っていない場合は、開発やステージングに使用できる無料の一時的なクエリURLを使って、自分のdappからクエリを実行できます。 フロントエンドアプリケーションからサブグラフを照会する方法については、[こちら](/developer/querying-from-your-app)の説明をご覧ください。 +You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). -## ホスティングサービス +## Hosted Service -### 1. Graph CLIのインストール +### 1. Install the Graph CLI -"Graph CLI "はnpmパッケージなので、使用するには`npm`または `yarn`がインストールされていなければなりません。 +"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. ```sh # NPM @@ -127,39 +127,39 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. サブグラフの初期化 +### 2. Initialize your Subgraph -- 既存のコントラクトからサブグラフを初期化します。 +- Initialize your subgraph from an existing contract. ```sh $ graph init --product hosted-service --from-contract
``` -- サブグラフの名前を聞かれます。 形式は`/`です。 例:`graphprotocol/examplesubgraph` +- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` -- 例題から初期化したい場合は、以下のコマンドを実行します。 +- If you'd like to initialize from an example, run the command below: ```sh $ graph init --product hosted-service --from-example ``` -- 例の場合、サブグラフはDani GrantによるGravityコントラクトに基づいており、ユーザーのアバターを管理し、アバターが作成または更新されるたびに`NewGravatar`または`UpdateGravatar`イベントを発行します。 +- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -### 3. サブグラフの作成 +### 3. Write your Subgraph -先ほどのコマンドで、サブグラフを作成するための足場ができました。 サブグラフに変更を加える際には、主に3つのファイルを使用します: +The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: -- マニフェスト (subgraph.yaml) - マニフェストは、サブグラフがインデックスするデータソースを定義します。 -- スキーマ (schema.graphql) - GraphQLスキーマは、サブグラフから取得したいデータを定義します。 -- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 +- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index +- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph +- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema -サブグラフの書き方の詳細については、 [Create a Subgraph](/developer/create-subgraph-hosted) を参照してください。 +For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. サブグラフのデプロイ +### 4. Deploy your Subgraph -- Github アカウントを使用して[Hosted Service](https://thegraph.com/hosted-service/) にサインインします。 -- 「Add Subgraph」をクリックし、必要な情報を入力します。 手順2と同じサブグラフ名を使用します。 -- サブグラフのフォルダでcodegenを実行します。 +- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account +- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. +- Run codegen in the subgraph folder ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- アクセストークンを追加して、サブグラフをデプロイします。 アクセストークンは、ダッシュボードのHosted Serviceにあります。 +- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. ログの確認 +### 5. Check your logs -エラーが発生した場合は、ログを確認してください。 サブグラフが失敗している場合は、 [GraphiQL Playground](https://graphiql-online.com/) を使ってサブグラフの健全性をクエリすることができます。 [このエンドポイント](https://api.thegraph.com/index-node/graphql) を使用します。 なお、以下のクエリを活用して、サブグラフのデプロイメントIDを入力することができます。 この場合、 `Qm...` がデプロイメントIDです(これはSubgraphページの**Details**に記載されています)。 以下のクエリは、サブグラフが失敗したときに教えてくれるので、適宜デバッグすることができます: +The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: ```sh { @@ -222,6 +222,6 @@ $ graph deploy --product hosted-service / } ``` -### 6. サブグラフのクエリ +### 6. 
Query your Subgraph

-[こちらの手順](/hosted-service/query-hosted-service)に従って、ホステッドサービスでサブグラフをクエリします。
+Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service.

From f1f39d5551c064329dec4b7b4b5493e116517c6c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:05 -0500
Subject: [PATCH 039/241] New translations quick-start.mdx (Chinese Simplified)

---
 pages/zh/developer/quick-start.mdx | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/pages/zh/developer/quick-start.mdx b/pages/zh/developer/quick-start.mdx
index 5c07399604fd..398321403236 100644
--- a/pages/zh/developer/quick-start.mdx
+++ b/pages/zh/developer/quick-start.mdx
@@ -7,9 +7,9 @@ This guide will quickly take you through how to initialize, create, and deploy y
 - **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet**
 - **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnet (e.g. Binance, Matic, etc)

-## Subgraph Studio
+## 子图工作室

-### 1. 安装Graph CLI
+### 1. Install the Graph CLI

 The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it.

@@ -113,9 +113,9 @@ The logs should tell you if there are any errors. If your subgraph is failing, y

 You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app).

-## Hosted Service
+## 托管服务

-### 1. 安装Graph CLI
+### 1. Install the Graph CLI

 The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it.

From 3285b497af6c1c454c430d0d1d2d89e21c62f877 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:07 -0500
Subject: [PATCH 040/241] New translations deploy-subgraph-hosted.mdx (Spanish)

---
 pages/es/hosted-service/deploy-subgraph-hosted.mdx | 82 +++++++++----------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/pages/es/hosted-service/deploy-subgraph-hosted.mdx b/pages/es/hosted-service/deploy-subgraph-hosted.mdx
index 5b5c2dacade7..bdc532e205e4 100644
--- a/pages/es/hosted-service/deploy-subgraph-hosted.mdx
+++ b/pages/es/hosted-service/deploy-subgraph-hosted.mdx
@@ -1,56 +1,56 @@
 ---
-title: Despliega un Subgrafo en el Servicio Alojado
+title: Deploy a Subgraph to the Hosted Service
 ---

-Si aún no lo has comprobado, revisa cómo escribir los archivos que componen un [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) y cómo instalar el [Graph CLI](https://github.com/graphprotocol/graph-cli) para generar el código para tu subgrafo. Ahora, es el momento de desplegar tu subgrafo en el Servicio Alojado, también conocido como Hosted Service.
+If you haven't already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service.
-## Crear una cuenta en el Servicio Alojado +## Create a Hosted Service account -Antes de utilizar el Servicio Alojado, crea una cuenta en nuestro Servicio Alojado. Para ello necesitarás una cuenta [Github](https://github.com/); si no tienes una, debes crearla primero. A continuación, navega hasta el [Hosted Service](https://thegraph.com/hosted-service/), haz clic en el botón _'Sign up with Github'_ y completa el flujo de autorización de Github. +Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. -## Guardar el Token de Acceso +## Store the Access Token -Luego de crear la cuenta, navega a tu [dashboard](https://thegraph.com/hosted-service/dashboard). Copia el token de acceso que aparece en el dashboard y ejecuta `graph auth --product hosted-service `. Esto almacenará el token de acceso en tu computadora. Sólo tienes que hacerlo una vez, o si alguna vez regeneras el token de acceso. +After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. -## Crear un Subgrafo en el Servicio Alojado +## Create a Subgraph on the Hosted Service -Antes de desplegar el subgrafo, es necesario crearlo en The Graph Explorer. Ve a [dashboard](https://thegraph.com/hosted-service/dashboard) y haz clic en el botón _'Add Subgraph'_ y completa la información siguiente según corresponda: +Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: -**Image** - Selecciona una imagen que se utilizará como imagen de vista previa y miniatura para el subgrafo. +**Image** - Select an image to be used as a preview image and thumbnail for the subgraph. -**Subgraph Name** -Junto con el nombre de la cuenta con la que se crea el subgrafo, esto también definirá el nombre de estilo `account-name/subgraph-name` utilizado para los despliegues y los endpoints de GraphQL. _Este campo no puede ser cambiado posteriormente._ +**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ -**Account** - La cuenta con la que se crea el subgrafo. Puede ser la cuenta de un individuo o de una organización. _Los Subgrafos no pueden ser movidos entre cuentas posteriormente._ +**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ -**Subtitle** - Texto que aparecerá en las tarjetas del subgrafo. +**Subtitle** - Text that will appear in subgraph cards. -**Description** - Descripción del subgrafo, visible en la página de detalles del subgrafo. +**Description** - Description of the subgraph, visible on the subgraph details page. -**GitHub URL** Enlace al repositorio de subgrafos en GitHub. 
+**GitHub URL** - Link to the subgraph repository on GitHub. -**Hide** - Al activar esta opción se oculta el subgrafo en the Graph Explorer. +**Hide** - Switching this on hides the subgraph in the Graph Explorer. -Después de guardar el nuevo subgrafo, se te muestra una pantalla con ayuda sobre cómo instalar the Graph CLI, cómo generar el andamiaje para un nuevo subgrafo, y cómo desplegar tu subgrafo. Los dos primeros pasos se trataron en la sección [Definir un Subgrafo](/developer/define-subgraph-hosted). +After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). -## Desplegar un Subgrupo en el Servicio Alojado +## Deploy a Subgraph on the Hosted Service -El despliegue de tu subgrafo subirá los archivos del subgrafo que has construido con `yarn build` a IPFS y le dirá a Graph Explorer que empiece a indexar tu subgrafo usando estos archivos. +Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files. -El subgrafo lo despliegas ejecutando `yarn deploy` +You deploy the subgraph by running `yarn deploy` -Después de desplegar el subgrafo, The Graph Explorer pasará a mostrar el estado de sincronización de tu subgrafo. Dependiendo de la cantidad de datos y del número de eventos que haya que extraer de los bloques históricos de Ethereum, empezando por el bloque génesis, la sincronización puede tardar desde unos minutos hasta varias horas. El estado del subgrafo cambia a `Synced` una vez que the Graph Node ha extraído todos los datos de los bloques históricos. The Graph Node continuará inspeccionando los bloques de Ethereum para tu subgrafo a medida que estos bloques sean minados. +After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. -## Re-Desplegar un Subgrafo +## Redeploying a Subgraph -Cuando hagas cambios en la definición de tu subgrafo, por ejemplo para arreglar un problema en los mapeos de entidades, ejecuta de nuevo el comando `yarn deploy` anterior para desplegar la versión actualizada de tu subgrafo. Cualquier actualización de un subgrafo requiere que Graph Node reindexe todo tu subgrafo, de nuevo empezando por el bloque génesis. +When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. -Si tu subgrafo previamente desplegado está todavía en estado `Syncing`, será inmediatamente reemplazado por la nueva versión desplegada. 
Si el subgrafo previamente desplegado ya está completamente sincronizado, Graph Node marcará la nueva versión desplegada como `Pending Version`, la sincronizará en segundo plano, y sólo reemplazará la versión actualmente desplegada por la nueva una vez que la sincronización de la nueva versión haya terminado. Esto asegura que tienes un subgrafo con el que trabajar mientras la nueva versión se sincroniza. +If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. -### Desplegar el subgrafo en múltiples redes Ethereum +### Deploying the subgraph to multiple Ethereum networks -En algunos casos, querrás desplegar el mismo subgrafo en múltiples redes Ethereum sin duplicar todo su código. El principal desafío que supone esto es que las direcciones de los contratos en estas redes son diferentes. Una solución que permite parametrizar aspectos como las direcciones de los contratos es generar partes de los mismos mediante un sistema de plantillas como [Mustache](https://mustache.github.io/) o [Handlebars](https://handlebarsjs.com/). +In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -Para ilustrar este enfoque, supongamos que un subgrafo debe desplegarse en mainnet y Ropsten utilizando diferentes direcciones de contrato. Entonces podrías definir dos archivos de configuración que proporcionen las direcciones para cada red: +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -59,7 +59,7 @@ Para ilustrar este enfoque, supongamos que un subgrafo debe desplegarse en mainn } ``` -y +and ```json { @@ -68,7 +68,7 @@ y } ``` -Junto con eso, sustituirías el nombre de la red y las direcciones en el manifiesto con un marcador de posición variable `{{network}}` y `{{address}}` y renombra el manifiesto a e.g. `subgraph.template.yaml`: +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: ```yaml # ... 
@@ -85,7 +85,7 @@ dataSources:
     kind: ethereum/events
 ```

-Para generar un manifiesto a cualquiera de las dos redes, podrías añadir dos comandos adicionales a `package.json` junto con una dependencia en `mustache`:
+In order to generate a manifest for either network, you could add two additional commands to `package.json` along with a dependency on `mustache`:

```json
{
@@ -102,7 +102,7 @@ Para generar un manifiesto a cualquiera de las dos redes, podrías añadir dos c
}
```

-Para desplegar este subgrafo para mainnet o Ropsten, sólo tienes que ejecutar uno de los dos comandos siguientes:
+To deploy this subgraph for mainnet or Ropsten, you would now simply run one of the two following commands:

```sh
# Mainnet:
@@ -112,15 +112,15 @@ yarn prepare:mainnet && yarn deploy
yarn prepare:ropsten && yarn deploy
```

-Un ejemplo práctico de esto se puede encontrar [aquí](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).

-**Nota:** Este enfoque también puede aplicarse a situaciones más complejas, en las que es necesario sustituir más que las direcciones de los contratos y los nombres de las redes o en las que también se generan mapeos o ABIs a partir de plantillas.
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs also need to be generated from templates.

-## Comprobar de la fortaleza del subgrafo
+## Checking subgraph health

-Si un subgrafo se sincroniza con éxito, es una buena señal de que seguirá funcionando bien para siempre. Sin embargo, los nuevos disparadores en la cadena pueden hacer que tu subgrafo se encuentre con una condición de error no probada o puede empezar a retrasarse debido a problemas de rendimiento o problemas con los operadores de nodos.
+If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition, or it may start to fall behind due to performance issues or issues with the node operators.

-Graph Node expone un endpoint graphql que puedes consultar para comprobar el estado de tu subgrafo. En el Servicio Alojado, está disponible en `https://api.thegraph.com/index-node/graphql`. En el nodo local está disponible por default en el puerto `8030/graphql`. El esquema completo para este endpoint se puede encontrar [aquí](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). A continuación se muestra un ejemplo de consulta que comprueba el estado de la versión actual de un subgrafo:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:

```graphql
{
@@ -147,14 +147,14 @@ Graph Node expone un endpoint graphql que puedes consultar para comprobar el est
}
```

-Esto te dará el `chainHeadBlock` que puedes comparar con el `latestBlock` de tu subgrafo para comprobar si se está retrasando.
`synced` informa si el subgrafo ha alcanzado la cadena. `health` actualmente puede tomar los valores de `healthy` si no hubo errores, o `failed` si hubo un error que detuvo el progreso del subgrafo. En este caso puedes consultar el campo `fatalError` para conocer los detalles de este error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs whether the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error.

-## Política de archivos de subgrafos
+## Subgraph archive policy

-El Servicio Alojado es un indexador gratuito de Graph Node. Los desarrolladores pueden desplegar subgrafos que indexen una serie de redes, que serán indexadas y estarán disponibles para su consulta a través de graphQL.
+The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed and made available to query via GraphQL.

-Para mejorar el rendimiento del servicio para los subgrafos activos, el Servicio Alojado archivará los subgrafos que estén inactivos.
+To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive.

-**Un subgrafo se define como "inactivo" si se desplegó en el Servicio Alojado hace más de 45 días, y si ha recibido 0 consultas en los últimos 30 días.**
+**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.**

-Los desarrolladores serán notificados por correo electrónico si uno de sus subgrafos ha sido marcado como inactivo 7 días antes de su eliminación. Si desean "activar" su subgrafo, pueden hacerlo realizando una consulta en el playground graphQL de su subgrafo. Los desarrolladores siempre pueden volver a desplegar un subgrafo archivado si lo necesitan de nuevo.
+Developers will be notified by email 7 days before one of their subgraphs is removed if it has been marked as inactive. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service GraphQL playground. Developers can always redeploy an archived subgraph if it is required again.
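The health-check section in the patch above only shows the raw status query. As a rough illustration of how it could be issued from a script, here is a hedged TypeScript sketch. It assumes a Node 18+ runtime with the global `fetch` API, uses `org-name/subgraph-name` as a placeholder subgraph name, and nests the status fields (`synced`, `health`, `fatalError`, `chainHeadBlock`, `latestBlock`) the way the index-node schema linked above defines them; treat it as a starting point to adapt rather than official tooling.

```typescript
// Minimal health-check sketch for the index-node status endpoint described above.
// Assumptions (not taken from the patch): Node 18+ with the global fetch API,
// and "org-name/subgraph-name" is a placeholder for a real subgraph name.
const INDEX_NODE = 'https://api.thegraph.com/index-node/graphql'

const statusQuery = (name: string) => `{
  indexingStatusForCurrentVersion(subgraphName: "${name}") {
    synced
    health
    fatalError { message handler block { number } }
    chains { chainHeadBlock { number } latestBlock { number } }
  }
}`

async function checkSubgraphHealth(name: string): Promise<void> {
  const res = await fetch(INDEX_NODE, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: statusQuery(name) }),
  })
  const { data } = await res.json()
  const status = data.indexingStatusForCurrentVersion

  console.log(`health: ${status.health}, synced: ${status.synced}`)

  // Compare the chain head with the latest indexed block to see how far behind the subgraph is.
  const chain = status.chains[0]
  if (chain?.latestBlock) {
    const behind = Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number)
    console.log(`blocks behind chain head: ${behind}`)
  }

  // If health is "failed", the fatalError fields match the ones discussed in the text above.
  if (status.fatalError) {
    console.error(`failed in ${status.fatalError.handler ?? 'unknown handler'}: ${status.fatalError.message}`)
  }
}

checkSubgraphHealth('org-name/subgraph-name').catch(console.error)
```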
From 4603efc2621f8dcc90dd8b231f605aaf9409ad3c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:11 -0500 Subject: [PATCH 041/241] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 82 +++++++++---------- 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index 5fe5ccacae0e..bdc532e205e4 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -1,56 +1,56 @@ --- -title: 将子图部署到托管服务上 +title: Deploy a Subgraph to the Hosted Service --- -如果您尚未查看,请先查看如何编写组成 [子图清单](/developer/create-subgraph-hosted#the-subgraph-manifest) 的文件以及如何安装 [Graph CLI](https://github.com/graphprotocol/graph-cli) 为您的子图生成代码。 现在,让我们将您的子图部署到托管服务上。 +If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. -## 创建托管服务帐户 +## Create a Hosted Service account -在使用托管服务之前,请先在我们的托管服务中创建一个帐户。 为此,您将需要一个 [Github](https://github.com/) 帐户;如果您还没有,您需要先创建一个账户。 然后,导航到 [托管服务](https://thegraph.com/hosted-service/), 单击 _'使用 Github 注册'_ 按钮并完成 Github 的授权流程。 +Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. -## 存储访问令牌 +## Store the Access Token -创建帐户后,导航到您的 [仪表板](https://thegraph.com/hosted-service/dashboard)。 复制仪表板上显示的访问令牌并运行 `graph auth --product hosted-service `。 这会将访问令牌存储在您的计算机上。 如果您不需要重新生成访问令牌,您就只需要这样做一次。 +After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. -## 在托管服务上创建子图 +## Create a Subgraph on the Hosted Service -在部署子图之前,您需要在 The Graph Explorer 中创建它。 转到 [仪表板](https://thegraph.com/hosted-service/dashboard) ,单击 _'添加子图'_ 按钮,并根据需要填写以下信息: +Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: -**图像** - 选择要用作子图的预览图和缩略图的图像。 +**Image** - Select an image to be used as a preview image and thumbnail for the subgraph. -**子图名称** - 子图名称连同下面将要创建的子图帐户名称,将定义用于部署和 GraphQL 端点的`account-name/subgraph-name`样式名称。 _此字段以后无法更改。_ +**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ -**帐户** - 创建子图的帐户。 这可以是个人或组织的帐户。 _以后不能在帐户之间移动子图。_ +**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. 
_Subgraphs cannot be moved between accounts later._ -**副标题** - 将出现在子图卡中的文本。 +**Subtitle** - Text that will appear in subgraph cards. -**描述** - 子图的描述,在子图详细信息页面上可见。 +**Description** - Description of the subgraph, visible on the subgraph details page. -**GitHub URL** - 存储在GitHub 上的子图代码的链接。 +**GitHub URL** - Link to the subgraph repository on GitHub. -**隐藏** - 打开此选项可隐藏Graph Explorer中的子图。 +**Hide** - Switching this on hides the subgraph in the Graph Explorer. -保存新子图后,您会看到一个屏幕,其中包含有关如何安装 Graph CLI、如何为新子图生成脚手架以及如何部署子图的帮助信息。 前面两部分在[定义子图](/developer/define-subgraph-hosted)中进行了介绍。 +After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). -## 在托管服务上部署子图 +## Deploy a Subgraph on the Hosted Service -一旦部署您的子图,您使用`yarn build` 命令构建的子图文件将被上传到 IPFS,并告诉 Graph Explorer 开始使用这些文件索引您的子图。 +Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files. -您可以通过运行 `yarn deploy`来部署子图。 +You deploy the subgraph by running `yarn deploy` -部署子图后,Graph Explorer将切换到显示子图的同步状态。 根据需要从历史以太坊区块中提取的数据量和事件数量的不同,从创世区块开始,同步操作可能需要几分钟到几个小时。 一旦 Graph节点从历史区块中提取了所有数据,子图状态就会切换到`Synced`。 当新的以太坊区块出现时,Graph节点将继续按照子图的要求检查这些区块的信息。 +After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. -## 重新部署子图 +## Redeploying a Subgraph -更改子图定义后,例如:修复实体映射中的一个问题,再次运行上面的 `yarn deploy` 命令可以部署新版本的子图。 子图的任何更新都需要Graph节点再次从创世块开始重新索引您的整个子图。 +When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. -如果您之前部署的子图仍处于`Syncing`状态,系统则会立即将其替换为新部署的版本。 如果之前部署的子图已经完全同步,Graph节点会将新部署的版本标记为`Pending Version`,在后台进行同步,只有在新版本同步完成后,才会用新的版本替换当前部署的版本。 这样做可以确保在新版本同步时您仍然有子图可以使用。 +If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. -### 将子图部署到多个以太坊网络 +### Deploying the subgraph to multiple Ethereum networks -在某些情况下,您可能希望将相同的子图部署到多个以太坊网络,而无需复制其所有代码。 这样做的主要挑战是这些网络上的合约地址不同。 允许参数化合约地址等配置的一种解决方案是使用 [Mustache](https://mustache.github.io/)或 [Handlebars](https://handlebarsjs.com/)等模板系统。 +In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. 
One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -为了说明这种方法,我们假设使用不同的合约地址将子图部署到主网和 Ropsten上。 您可以定义两个配置文件,为每个网络提供相应的地址: +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -59,7 +59,7 @@ title: 将子图部署到托管服务上 } ``` -和 +and ```json { @@ -68,7 +68,7 @@ title: 将子图部署到托管服务上 } ``` -除此之外,您可以用变量占位符 `{{network}}` 和 `{{address}}` 替换清单中的网络名称和地址,并将清单重命名为例如 `subgraph.template.yaml`: +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: ```yaml # ... @@ -85,7 +85,7 @@ dataSources: kind: ethereum/events ``` -为了给每个网络生成清单,您可以向 `package.json` 添加两个附加命令,以及对 `mustache` 的依赖项: +In order generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: ```json { @@ -102,7 +102,7 @@ dataSources: } ``` -要为主网或 Ropsten 部署此子图,您现在只需运行以下两个命令中的任意一个: +To deploy this subgraph for mainnet or Ropsten you would now simply run one of the two following commands: ```sh # Mainnet: @@ -112,15 +112,15 @@ yarn prepare:mainnet && yarn deploy yarn prepare:ropsten && yarn deploy ``` -您可以在[这里](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759)找到一个工作示例。 +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). -**注意:** 这种方法也可以应用在更复杂的情况下,例如:需要替换的不仅仅是合约地址和网络名称,或者还需要从模板生成映射或 ABI。 +**Note:** This approach can also be applied more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -## 检查子图状态 +## Checking subgraph health -如果子图成功同步,这是表明它将运行良好的一个好的信号。 但是,链上的新事件可能会导致您的子图遇到未经测试的错误环境,或者由于性能或节点方面的问题而开始落后于链上数据。 +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph 节点公开了一个 graphql 端点,您可以通过查询该端点来检查子图的状态。 在托管服务上,该端点的链接是 `https://api.thegraph.com/index-node/graphql`。 在本地节点上,默认情况下该端点在端口 `8030/graphql` 上可用。 该端点的完整数据模式可以在[此处](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。 这是一个检查子图当前版本状态的示例查询: +Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
Here is an example query that checks the status of the current version of a subgraph: ```graphql { @@ -147,14 +147,14 @@ Graph 节点公开了一个 graphql 端点,您可以通过查询该端点来 } ``` -这将为您提供 `chainHeadBlock`,您可以将其与子图上的 `latestBlock` 进行比较,以检查子图是否落后。 通过`synced`,可以了解子图是否与链上数据完全同步。 如果子图没有发生错误,`health` 将返回`healthy`,如果有一个错误导致子图的同步进度停止,那么 `health`将返回`failed` 。 在这种情况下,您可以检查 `fatalError` 字段以获取有关此错误的详细信息。 +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. -## 子图归档策略 +## Subgraph archive policy -托管服务是一个免费的Graph节点索引器。 开发人员可以部署索引一系列网络的子图,这些网络将被索引,并可以通过 graphQL 进行查询。 +The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. -为了提高活跃子图的服务性能,托管服务将归档不活跃的子图。 +To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. -**如果一个子图在 45 天前部署到托管服务,并且在过去 30 天内收到 0 个查询,则将其定义为“不活跃”。** +**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.** -如果开发人员的一个子图被标记为不活跃,并将 7 天后被删除,托管服务会通过电子邮件通知开发者。 如果他们希望“激活”他们的子图,他们可以通过在其子图的托管服务 graphQL playground中发起查询来实现。 如果再次需要使用这个子图,开发人员也可以随时重新部署存档的子图。 +Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. From 03f80a19de3c9e423d6ede784e8dd5bdcb18d6e9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:12 -0500 Subject: [PATCH 042/241] New translations migrating-subgraph.mdx (Spanish) --- pages/es/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/hosted-service/migrating-subgraph.mdx b/pages/es/hosted-service/migrating-subgraph.mdx index eda54d1931ed..85f72f053b30 100644 --- a/pages/es/hosted-service/migrating-subgraph.mdx +++ b/pages/es/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## Introducción +## Introduction This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! 
Check out the following resources or watch o From 160020fbf1afbf70d4c556c153beaa0c5eda5bcf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:13 -0500 Subject: [PATCH 043/241] New translations migrating-subgraph.mdx (Arabic) --- pages/ar/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ar/hosted-service/migrating-subgraph.mdx b/pages/ar/hosted-service/migrating-subgraph.mdx index 9f314e8e9034..85f72f053b30 100644 --- a/pages/ar/hosted-service/migrating-subgraph.mdx +++ b/pages/ar/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## مقدمة +## Introduction This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o From 6408863eb733cf3a49226ece263249b2d9600dbc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:15 -0500 Subject: [PATCH 044/241] New translations migrating-subgraph.mdx (Japanese) --- pages/ja/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/hosted-service/migrating-subgraph.mdx b/pages/ja/hosted-service/migrating-subgraph.mdx index 8d556f5644db..85f72f053b30 100644 --- a/pages/ja/hosted-service/migrating-subgraph.mdx +++ b/pages/ja/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## イントロダクション +## Introduction This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o From 1f8e6c0a98fd86d295f208235b3a26605c252790 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:20 -0500 Subject: [PATCH 045/241] New translations graphql-api.mdx (Spanish) --- pages/es/developer/graphql-api.mdx | 120 ++++++++++++++--------------- 1 file changed, 60 insertions(+), 60 deletions(-) diff --git a/pages/es/developer/graphql-api.mdx b/pages/es/developer/graphql-api.mdx index 4513e9f5c724..f9cb6214fcd9 100644 --- a/pages/es/developer/graphql-api.mdx +++ b/pages/es/developer/graphql-api.mdx @@ -1,16 +1,16 @@ --- -title: API GraphQL +title: GraphQL API --- -Esta guía explica la API de consulta GraphQL que se utiliza para the Graph Protocol. +This guide explains the GraphQL Query API that is used for the Graph Protocol. -## Consultas +## Queries -En tu esquema de subgrafos defines tipos llamados `Entities`. 
Por cada tipo de `Entity`, se generará un campo `entity` y `entities` en el nivel superior del tipo `Query`. Ten en cuenta que no es necesario incluir `query` en la parte superior de la consulta `graphql` cuando se utiliza The Graph. +In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. -#### Ejemplos +#### Examples -Consulta de una única entidad `Token` definida en tu esquema: +Query for a single `Token` entity defined in your schema: ```graphql { @@ -21,9 +21,9 @@ Consulta de una única entidad `Token` definida en tu esquema: } ``` -**Nota:** Cuando se consulta una sola entidad, el campo `id` es obligatorio y debe ser un string. +**Note:** When querying for a single entity, the `id` field is required and it must be a string. -Consulta todas las entidades `Token`: +Query all `Token` entities: ```graphql { @@ -34,11 +34,11 @@ Consulta todas las entidades `Token`: } ``` -### Clasificación +### Sorting -Al consultar una colección, el parámetro `orderBy` puede utilizarse para ordenar por un atributo específico. Además, el `orderDirection` se puede utilizar para especificar la dirección de ordenación, `asc` para ascendente o `desc` para descendente. +When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. -#### Ejemplo +#### Example ```graphql { @@ -49,17 +49,17 @@ Al consultar una colección, el parámetro `orderBy` puede utilizarse para orden } ``` -### Paginación +### Pagination -Al consultar una colección, el parámetro `first` puede utilizarse para paginar desde el principio de la colección. Cabe destacar que el orden por defecto es por ID en orden alfanumérico ascendente, no por tiempo de creación. +When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. -Además, el parámetro `skip` puede utilizarse para saltar entidades y paginar. por ejemplo, `first:100` muestra las primeras 100 entidades y `first:100, skip:100` muestra las siguientes 100 entidades. +Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -Las consultas deben evitar el uso de valores de `skip` muy grandes, ya que suelen tener un rendimiento deficiente. Para recuperar un gran número de elementos, es mucho mejor para paginar recorrer las entidades basándose en un atributo, como se muestra en el último ejemplo. +Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. -#### Ejemplo +#### Example -Consulta los primeros 10 tokens: +Query the first 10 tokens: ```graphql { @@ -70,11 +70,11 @@ Consulta los primeros 10 tokens: } ``` -Para consultar grupos de entidades en medio de una colección, el parámetro `skip` puede utilizarse junto con el parámetro `first` para omitir un número determinado de entidades empezando por el principio de la colección. 
+To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. -#### Ejemplo +#### Example -Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la colección: +Query 10 `Token` entities, offset by 10 places from the beginning of the collection: ```graphql { @@ -85,9 +85,9 @@ Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la c } ``` -#### Ejemplo +#### Example -Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. Por ejemplo, un cliente podría recuperar un gran número de tokens utilizando esta consulta: +If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: ```graphql { @@ -100,15 +100,15 @@ Si un cliente necesita recuperar un gran número de entidades, es mucho más efi } ``` -La primera vez, enviaría la consulta con `lastID = ""`, y para las siguientes peticiones pondría `lastID` al atributo `id` de la última entidad de la petición anterior. Este enfoque tendrá un rendimiento significativamente mejor que el uso de valores crecientes de `skip`. +The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. -### Filtro +### Filtering -Puedes utilizar el parámetro `where` en tus consultas para filtrar por diferentes propiedades. Puedes filtrar por múltiples valores dentro del parámetro `where`. +You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. -#### Ejemplo +#### Example -Desafíos de consulta con resultado `failed`: +Query challenges with `failed` outcome: ```graphql { @@ -122,9 +122,9 @@ Desafíos de consulta con resultado `failed`: } ``` -Puede utilizar sufijos como `_gt`, `_lte` para la comparación de valores: +You can use suffixes like `_gt`, `_lte` for value comparison: -#### Ejemplo +#### Example ```graphql { @@ -136,7 +136,7 @@ Puede utilizar sufijos como `_gt`, `_lte` para la comparación de valores: } ``` -Lista completa de sufijos de parámetros: +Full list of parameter suffixes: ```graphql _not @@ -154,17 +154,17 @@ _not_starts_with _not_ends_with ``` -Ten en cuenta que algunos sufijos sólo son compatibles con determinados tipos. Por ejemplo, `Boolean` solo admite `_not`, `_in`, y `_not_in`. +Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. -### Consultas sobre Time-travel +### Time-travel queries -Puedes consultar el estado de tus entidades no sólo para el último bloque, que es el predeterminado, sino también para un bloque arbitrario en el pasado. El bloque en el que debe producirse una consulta puede especificarse por su número de bloque o su hash de bloque incluyendo un argumento `block` en los campos de nivel superior de las consultas. +You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. 
The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the top-level fields of queries.

-El resultado de una consulta de este tipo no cambiará con el tiempo, es decir, la consulta en un determinado bloque pasado devolverá el mismo resultado sin importar cuándo se ejecute, con la excepción de que si se consulta en un bloque muy cercano al encabezado de la cadena de Ethereum, el resultado podría cambiar si ese bloque resulta no estar en la cadena principal y la cadena se reorganiza. Una vez que un bloque puede considerarse definitivo, el resultado de la consulta no cambiará.
+The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.

-Ten en cuenta que la implementación está sujeta a ciertas limitaciones que podrían violar estas garantías. La implementación no siempre puede decir que un hash de bloque dado no está en la cadena principal en absoluto, o que el resultado de una consulta por hash de bloque para un bloque que no puede considerarse final todavía podría estar influenciado por una reorganización de bloque que se ejecuta simultáneamente con la consulta. No afectan a los resultados de las consultas por el hash del bloque cuando éste es definitivo y se sabe que está en la cadena principal. [ Esta cuestión](https://github.com/graphprotocol/graph-node/issues/1405) explica con detalle cuáles son estas limitaciones.
+Note that the current implementation is still subject to certain limitations that might violate these guarantees. The implementation cannot always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that cannot be considered final yet might be influenced by a block reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.

-#### Ejemplo
+#### Example

```graphql
{
@@ -178,9 +178,9 @@ Ten en cuenta que la implementación está sujeta a ciertas limitaciones que pod
}
```

-Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `Application`, tal y como existían directamente después de procesar el bloque número 8.000.000.
+This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.

-#### Ejemplo
+#### Example

```graphql
{
@@ -194,26 +194,26 @@ Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `A
}
```

-Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `Application`, tal y como existían directamente después de procesar el bloque con el hash dado.
+This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
-### Consultas de Búsqueda de Texto Completo +### Fulltext Search Queries -Los campos de consulta de búsqueda de texto completo proporcionan una API de búsqueda de texto expresiva que puede añadirse al esquema de subgrafos y personalizarse. Consulta [Definiendo los campos de búsqueda de texto completo](/developer/create-subgraph-hosted#defining-fulltext-search-fields) para añadir la búsqueda de texto completo a tu subgrafo. +Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. -Las consultas de búsqueda de texto completo tienen un campo obligatorio, `text`, para suministrar los términos de búsqueda. Hay varios operadores especiales de texto completo que se pueden utilizar en este campo de búsqueda de `text`. +Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. -Operadores de búsqueda de texto completo: +Fulltext search operators: -| Símbolo | Operador | Descripción | -| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | -| | | `Or` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados | -| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | -| `:*` | `Prefix` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | -#### Ejemplos +#### Examples -Utilizando el operador `or`, esta consulta filtrará las entidades del blog que tengan variaciones de "anarchism" o de "crumpet" en sus campos de texto completo. +Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. ```graphql { @@ -226,7 +226,7 @@ Utilizando el operador `or`, esta consulta filtrará las entidades del blog que } ``` -El operador `follow by` especifica unas palabras a una distancia determinada en los documentos de texto completo. La siguiente consulta devolverá todos los blogs con variaciones de "decentralize" seguidas de "philosophy" +The `follow by` operator specifies a words a specific distance apart in the fulltext documents. 
The following query will return all blogs with variations of "decentralize" followed by "philosophy"

```graphql
{
@@ -239,7 +239,7 @@ El operador `follow by` especifica unas palabras a una distancia determinada en
}
```

-Combina los operadores de texto completo para crear filtros más complejos. Con un operador de búsqueda de pretexto combinado con un follow by esta consulta de ejemplo coincidirá con todas las entidades del blog con palabras que empiecen por "lou" seguidas de "music".
+Combine fulltext operators to make more complex filters. With a prefix search operator combined with a follow by operator, this example query will match all blog entities with words that start with "lou" followed by "music".

```graphql
{
@@ -252,16 +252,16 @@ Combina los operadores de texto completo para crear filtros más complejos. Con
}
```

-## Esquema
+## Schema

-El esquema de tu fuente de datos, es decir, los tipos de entidad, los valores y las relaciones que están disponibles para la consulta, se definen a través del [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+The schema of your data source--that is, the entity types, values, and relationships that are available to query--is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-Los esquemas de GraphQL suelen definir tipos raíz para `queries`, `subscriptions` y `mutations`. The Graph solo admite `queries`. El tipo de `Query` raíz de tu subgrafo se genera automáticamente a partir del esquema GraphQL que se incluye en el manifiesto de tu subgrafo.
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest.

-> **Nota:** Nuestra API no expone mutaciones porque se espera que los desarrolladores emitan transacciones directamente contra la blockchain subyacente desde sus aplicaciones.
+> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

-### Entidades
+### Entities

-Todos los tipos GraphQL con directivas `@entity` en tu esquema serán tratados como entidades y deben tener un campo `ID`.
+All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.

-> **Nota:** Actualmente, todos los tipos de tu esquema deben tener una directiva `@entity`. En el futuro, trataremos los tipos sin una directiva `@entity` como objetos de valor, pero esto todavía no está soportado.
+> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
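The pagination discussion in the patch above describes the `lastID` pattern in prose only. The following TypeScript sketch shows one way a client could implement it; the endpoint URL and the `tokens` entity with an `owner` field are assumptions standing in for a real subgraph schema, and a Node 18+ runtime with the global `fetch` API is assumed.

```typescript
// Sketch of the lastID pagination pattern from the API guide above.
// Assumptions (not taken from the patch): Node 18+ with global fetch, and the
// endpoint URL plus the `tokens` entity with an `owner` field are placeholders.
const QUERY_URL = 'https://api.thegraph.com/subgraphs/name/org-name/subgraph-name'

interface Token {
  id: string
  owner: string
}

async function fetchAllTokens(): Promise<Token[]> {
  const all: Token[] = []
  let lastID = ''

  while (true) {
    // Default ordering is by id ascending, so id_gt pages through the whole set.
    const query = `{
      tokens(first: 1000, where: { id_gt: "${lastID}" }) {
        id
        owner
      }
    }`
    const res = await fetch(QUERY_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    })
    const { data } = await res.json()
    const page: Token[] = data.tokens

    all.push(...page)
    if (page.length < 1000) break // a shorter page means the end was reached
    lastID = page[page.length - 1].id // continue after the last id seen
  }

  return all
}

fetchAllTokens().then((tokens) => console.log(`fetched ${tokens.length} tokens`))
```

Because results are sorted by `id` in ascending order by default, filtering with `id_gt` avoids the poor performance of large `skip` values described in the text above.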
From a0c0e82070a3ac9852780337031c831736054e30 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:21 -0500 Subject: [PATCH 046/241] New translations graphql-api.mdx (Arabic) --- pages/ar/developer/graphql-api.mdx | 116 ++++++++++++++--------------- 1 file changed, 58 insertions(+), 58 deletions(-) diff --git a/pages/ar/developer/graphql-api.mdx b/pages/ar/developer/graphql-api.mdx index 15ab979dacff..f9cb6214fcd9 100644 --- a/pages/ar/developer/graphql-api.mdx +++ b/pages/ar/developer/graphql-api.mdx @@ -2,15 +2,15 @@ title: GraphQL API --- -يشرح هذا الدليل GraphQL Query API المستخدمة في بروتوكول Graph. +This guide explains the GraphQL Query API that is used for the Graph Protocol. -## الاستعلامات +## Queries -في مخطط الـ subgraph الخاص بك ، يمكنك تعريف أنواع وتسمى `Entities`. لكل نوع من `Entity` ، سيتم إنشاء حقل `entity` و `entities` في المستوى الأعلى من نوع `Query`. لاحظ أنه لا يلزم تضمين ` query ` أعلى استعلام ` graphql ` عند استخدام The Graph. +In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. -#### أمثلة +#### Examples -الاستعلام عن كيان `Token` واحد معرف في مخططك: +Query for a single `Token` entity defined in your schema: ```graphql { @@ -21,9 +21,9 @@ title: GraphQL API } ``` -** ملاحظة: ** عند الاستعلام عن كيان واحد ، فإن الحقل ` id ` يكون مطلوبا ويجب أن يكون string. +**Note:** When querying for a single entity, the `id` field is required and it must be a string. -الاستعلام عن جميع كيانات `Token`: +Query all `Token` entities: ```graphql { @@ -34,11 +34,11 @@ title: GraphQL API } ``` -### الفرز +### Sorting -عند الاستعلام عن مجموعة ، يمكن استخدام البارامتر `orderBy` للترتيب حسب صفة معينة. بالإضافة إلى ذلك ، يمكن استخدام ` OrderDirection ` لتحديد اتجاه الفرز ،`asc` للترتيب التصاعدي أو `desc` للترتيب التنازلي. +When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. -#### مثال +#### Example ```graphql { @@ -49,17 +49,17 @@ title: GraphQL API } ``` -### ترقيم الصفحات +### Pagination -عند الاستعلام عن مجموعة ، يمكن استخدام البارامتر `first` لترقيم الصفحات من بداية المجموعة. من الجدير بالذكر أن ترتيب الفرز الافتراضي يكون حسب الـ ID بترتيب رقمي تصاعدي ، وليس حسب وقت الإنشاء. +When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. -علاوة على ذلك ، يمكن استخدام البارامتر ` skip ` لتخطي الكيانات وترقيم الصفحات. على سبيل المثال `first:100` يعرض أول 100 عنصر و `first:100, skip:100` يعرض 100 عنصر التالية. +Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -الاستعلامات يجب أن تتجنب استخدام قيم `skip` كبيرة جدا نظرا لأنها تؤدي بشكل عام أداء ضعيفا. لجلب عدد كبير من العناصر ، من الأفضل تصفح الكيانات بناء على صفة كما هو موضح في المثال الأخير. +Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. 
-#### مثال +#### Example -استعلم عن أول 10 توكن: +Query the first 10 tokens: ```graphql { @@ -70,11 +70,11 @@ title: GraphQL API } ``` -للاستعلام عن مجموعات الكيانات في منتصف المجموعة ، يمكن استخدام البارامتر `skip` بالاصافة لبارامتر `first` لتخطي عدد محدد من الكيانات بدءا من بداية المجموعة. +To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. -#### مثال +#### Example -الاستعلام عن 10 كيانات `Token` ،بإزاحة 10 أماكن من بداية المجموعة: +Query 10 `Token` entities, offset by 10 places from the beginning of the collection: ```graphql { @@ -85,9 +85,9 @@ title: GraphQL API } ``` -#### مثال +#### Example -إذا احتاج العميل إلى جلب عدد كبير من الكيانات ، فمن الأفضل أن تستند الاستعلامات إلى إحدى الصفات والفلترة حسب تلك الصفة. على سبيل المثال ، قد يجلب العميل عددا كبيرا من التوكن باستخدام هذا الاستعلام: +If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: ```graphql { @@ -100,15 +100,15 @@ title: GraphQL API } ``` -في المرة الأولى ، سيتم إرسال الاستعلام مع `lastID = ""` ، وبالنسبة للطلبات اللاحقة ، سيتم تعيين `lastID` إلى صفة `id` للكيان الأخير في الطلب السابق. أداء هذا الأسلوب أفضل بكثير من استخدام زيادة قيم `skip`. +The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. -### الفلترة +### Filtering -يمكنك استخدام البارامتر `where` في الاستعلام لتصفية الخصائص المختلفة. يمكنك الفلترة على قيم متعددة ضمن البارامتر `where`. +You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. -#### مثال +#### Example -تحديات الاسعلام مع نتيجة `failed`: +Query challenges with `failed` outcome: ```graphql { @@ -122,9 +122,9 @@ title: GraphQL API } ``` -يمكنك استخدام لواحق مثل ` _gt ` ، ` _lte ` لمقارنة القيم: +You can use suffixes like `_gt`, `_lte` for value comparison: -#### مثال +#### Example ```graphql { @@ -136,7 +136,7 @@ title: GraphQL API } ``` -القائمة الكاملة للواحق البارامترات: +Full list of parameter suffixes: ```graphql _not @@ -154,17 +154,17 @@ _not_starts_with _not_ends_with ``` -يرجى ملاحظة أن بعض اللواحق مدعومة فقط لأنواع معينة. على سبيل المثال ، ` Boolean ` يدعم فقط ` _not ` و ` _in ` و ` _not_in `. +Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. ### Time-travel queries -يمكنك الاستعلام عن حالة الكيانات الخاصة بك ليس فقط للكتلة الأخيرة ، والتي هي افتراضيا ، ولكن أيضا لكتلة اعتباطية في الماضي. يمكن تحديد الكتلة التي يجب أن يحدث فيها الاستعلام إما عن طريق رقم الكتلة أو hash الكتلة الخاص بها عن طريق تضمين وسيطة ` block ` في حقول المستوى الأعلى للاستعلامات. +You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. 
-لن تتغير نتيجة مثل هذا الاستعلام بمرور الوقت ، أي أن الاستعلام في كتلة سابقة معينة سيعيد نفس النتيجة بغض النظر عن وقت تنفيذها ، باستثناء أنه إذا قمت بالاستعلام في كتلة قريبة جدا من رأس سلسلة Ethereum ، قد تتغير النتيجة إذا تبين أن هذه الكتلة ليست في السلسلة الرئيسية وتمت إعادة تنظيم السلسلة. بمجرد اعتبار الكتلة نهائية ، لن تتغير نتيجة الاستعلام. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -لاحظ أن التنفيذ الحالي لا يزال يخضع لقيود معينة قد تنتهك هذه الضمانات. لا يمكن للتنفيذ دائما أن يخبرنا أن hash كتلة معينة ليست في السلسلة الرئيسية ، أو أن نتيجة استعلام لكتلة عن طريق hash الكتلة لا يمكن اعتبارها نهائية ومع ذلك قد تتأثر بإعادة تنظيم الكتلة التي تعمل بشكل متزامن مع الاستعلام. لا تؤثر نتائج الاستعلامات عن طريق hash الكتلة عندما تكون الكتلة نهائية ومعروفة بأنها موجودة في السلسلة الرئيسية. [ تشرح هذه المشكلة ](https://github.com/graphprotocol/graph-node/issues/1405) ماهية هذه القيود بالتفصيل. +Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. -#### مثال +#### Example ```graphql { @@ -178,9 +178,9 @@ _not_ends_with } ``` -سيعود هذا الاستعلام بكيانات ` Challenge ` وكيانات ` Application ` المرتبطة بها ، كما كانت موجودة مباشرة بعد معالجة رقم الكتلة 8،000،000. +This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. -#### مثال +#### Example ```graphql { @@ -194,26 +194,26 @@ _not_ends_with } ``` -سيعود هذا الاستعلام بكيانات ` Challenge ` وكيانات ` Application ` المرتبطة بها ، كما كانت موجودة مباشرة بعد معالجة الكتلة باستخدام hash المحددة. +This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. -### استعلامات بحث النص الكامل +### Fulltext Search Queries -حقول استعلام البحث عن نص كامل توفر API للبحث عن نص تعبيري يمكن إضافتها إلى مخطط الـ subgraph وتخصيصها. راجع [ تعريف حقول بحث النص الكامل ](/developer/create-subgraph-hosted#defining-fulltext-search-fields) لإضافة بحث نص كامل إلى الـ subgraph الخاص بك. +Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. -استعلامات البحث عن النص الكامل لها حقل واحد مطلوب ، وهو ` text ` ، لتوفير عبارة البحث. تتوفر العديد من عوامل النص الكامل الخاصة لاستخدامها في حقل البحث ` text `. +Fulltext search queries have one required field, `text`, for supplying search terms. 
Several special fulltext operators are available to be used in this `text` search field. -عوامل تشغيل البحث عن النص الكامل: +Fulltext search operators: -| رمز | عامل التشغيل | الوصف | -| ----------- | ------------ | --------------------------------------------------------------------------------------------------------------------------- | -| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة | -| | | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة | -| `<->` | `Follow by` | يحدد المسافة بين كلمتين. | -| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | -#### أمثلة +#### Examples -باستخدام العامل ` or` ، سيقوم الاستعلام هذا بتصفية blog الكيانات التي تحتوي على أشكال مختلفة من "anarchism" أو "crumpet" في حقول النص الكامل الخاصة بها. +Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. ```graphql { @@ -226,7 +226,7 @@ _not_ends_with } ``` -العامل ` follow by ` يحدد الكلمات بمسافة محددة عن بعضها في مستندات النص-الكامل. الاستعلام التالي سيعيد جميع الـ blogs التي تحتوي على أشكال مختلفة من "decentralize" متبوعة بكلمة "philosophy" +The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" ```graphql { @@ -239,7 +239,7 @@ _not_ends_with } ``` -اجمع بين عوامل تشغيل النص-الكامل لعمل فلترة أكثر تعقيدا. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". ```graphql { @@ -252,16 +252,16 @@ _not_ends_with } ``` -## المخطط +## Schema -يتم تعريف مخطط مصدر البيانات الخاص بك - أي أنواع الكيانات والقيم والعلاقات المتاحة للاستعلام - من خلال [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -مخططات GraphQL تعرف عموما أنواع الجذر لـ `queries`, و `subscriptions` و`mutations`. The Graph يدعم فقط `queries`. يتم إنشاء نوع الجذر `Query` لـ subgraph تلقائيا من مخطط GraphQL المضمن في subgraph manifest الخاص بك. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. 
The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. -> ** ملاحظة: ** الـ API الخاصة بنا لا تعرض الـ mutations لأنه يُتوقع من المطورين إصدار إجراءات مباشرة لـblockchain الأساسي من تطبيقاتهم. +> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. -### الكيانات +### Entities -سيتم التعامل مع جميع أنواع GraphQL التي تحتوي على توجيهات `entity@ ` في مخططك على أنها كيانات ويجب أن تحتوي على حقل ` ID `. +All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. -> ** ملاحظة: ** في الوقت الحالي ، يجب أن تحتوي جميع الأنواع في مخططك على توجيه `entity@ `. في المستقبل ، سنتعامل مع الأنواع التي لا تحتوي على التوجيه `entity@ ` ككائنات، لكن هذا غير مدعوم حتى الآن. +> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. From 45e1889491c36228ffac8d1985567d6cc505578b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:25 -0500 Subject: [PATCH 047/241] New translations matchstick.mdx (Spanish) --- pages/es/developer/matchstick.mdx | 88 +++++++++++++++---------------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/pages/es/developer/matchstick.mdx b/pages/es/developer/matchstick.mdx index 2cd0e327579d..3cf1ec761bb9 100644 --- a/pages/es/developer/matchstick.mdx +++ b/pages/es/developer/matchstick.mdx @@ -1,16 +1,16 @@ --- -title: Marco de Unit Testing +title: Unit Testing Framework --- -Matchstick es un marco de unit testing, desarrollado por [LimeChain](https://limechain.tech/), que permite a los desarrolladores de subgrafos probar su lógica de mapeo en un entorno sandbox y desplegar sus subgrafos con confianza! +Matchstick is a unit testing framework, developed by [LimeChain](https://limechain.tech/), that enables subgraph developers to test their mapping logic in a sandboxed environment and deploy their subgraphs with confidence! -Sigue la [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) para instalar. Ahora, puede pasar a escribir tu primera unit test. +Follow the [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) to install. Now, you can move on to writing your first unit test. -## Escribe una Unit Test +## Write a Unit Test -Veamos cómo sería una unit test sencilla, utilizando el Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). +Let's see how a simple unit test would look like, using the Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). -Suponiendo que tenemos la siguiente función handler (junto con dos funciones de ayuda para facilitarnos la vida): +Assuming we have the following handler function (along with two helper functions to make our life easier): ```javascript export function handleNewGravatar(event: NewGravatar): void { @@ -61,7 +61,7 @@ export function createNewGravatarEvent( } ``` -Primero tenemos que crear un archivo de prueba en nuestro proyecto. Hemos elegido el nombre `gravity.test.ts`. En el archivo recién creado tenemos que definir una función llamada `runTests()`. Es importante que la función tenga ese nombre exacto. 
Este es un ejemplo de cómo podrían ser nuestras pruebas: +We first have to create a test file in our project. We have chosen the name `gravity.test.ts`. In the newly created file we need to define a function named `runTests()`. It is important that the function has that exact name. This is an example of how our tests might look like: ```typescript import { clearStore, test, assert } from 'matchstick-as/assembly/index' @@ -95,27 +95,27 @@ export function runTests(): void { } ``` -¡Es mucho para desempacar! En primer lugar, una cosa importante a notar es que estamos importando cosas de `matchstick-as`, nuestra biblioteca de ayuda de AssemblyScript (distribuida como un módulo npm). Puedes encontrar el repositorio [aquí](https://github.com/LimeChain/matchstick-as). `matchstick-as` nos proporciona útiles métodos de prueba y también define la función `test()` que utilizaremos para construir nuestros bloques de prueba. El resto es bastante sencillo: esto es lo que ocurre: +That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks. The rest of it is pretty straightforward - here's what happens: -- Estamos configurando nuestro estado inicial y añadiendo una entidad Gravatar personalizada; -- Definimos dos objetos de evento `NewGravatar` junto con sus datos, utilizando la función `createNewGravatarEvent()`; -- Estamos llamando a los métodos handlers de esos eventos - `handleNewGravatars()` y pasando la lista de nuestros eventos personalizados; -- Hacemos valer el estado del almacén. ¿Cómo funciona eso? - Pasamos una combinación única de tipo de Entidad e id. A continuación, comprobamos un campo específico de esa Entidad y afirmamos que tiene el valor que esperamos que tenga. Hacemos esto tanto para la Entidad Gravatar inicial que añadimos al almacén, como para las dos entidades Gravatar que se añaden cuando se llama a la función del handler; -- Y por último - estamos limpiando el almacén usando `clearStore()` para que nuestra próxima prueba pueda comenzar con un objeto almacén fresco y vacío. Podemos definir tantos bloques de prueba como queramos. +- We're setting up our initial state and adding one custom Gravatar entity; +- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; +- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; +- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called; +- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. -Ya está: ¡hemos creado nuestra primera prueba! 👏 +There we go - we've created our first test! 👏 -❗ **IMPORTANTE:** _ Para que las pruebas funcionen, necesitamos exportar la función `runTests()` en nuestro archivo de mapeo. 
No se utilizará allí, pero la declaración de exportación tiene que estar allí para que pueda ser recogida por Rust más tarde al ejecutar las pruebas._ +❗ **IMPORTANT:** _In order for the tests to work, we need to export the `runTests()` function in our mappings file. It won't be used there, but the export statement has to be there so that it can get picked up by Rust later when running the tests._ -Puedes exportar la función wrapper de las pruebas en tu archivo de mapeo de la siguiente manera: +You can export the tests wrapper function in your mappings file like this: ``` export { runTests } from "../tests/gravity.test.ts"; ``` -❗ **IMPORTANTE:** _Actualmente hay un problema con el uso de Matchstick cuando se despliega tu subgrafo. Por favor, sólo usa Matchstick para pruebas locales, y elimina/comenta esta línea (`export { runTests } de "../tests/gravity.test.ts"`) una vez que hayas terminado. Esperamos resolver este problema en breve, ¡disculpa las molestias!_ +❗ **IMPORTANT:** _Currently there's an issue with using Matchstick when deploying your subgraph. Please only use Matchstick for local testing, and remove/comment out this line (`export { runTests } from "../tests/gravity.test.ts"`) once you're done. We expect to resolve this issue shortly, sorry for the inconvenience!_ -_Si no eliminas esa línea, obtendrás el siguiente mensaje de error al intentar desplegar tu subgrafo:_ +_If you don't remove that line, you will get the following error message when attempting to deploy your subgraph:_ ``` /... @@ -123,28 +123,28 @@ Mapping terminated before handling trigger: oneshot canceled .../ ``` -Ahora, para ejecutar nuestras pruebas, sólo tienes que ejecutar lo siguiente en la carpeta raíz de tu subgrafo: +Now in order to run our tests you simply need to run the following in your subgraph root folder: `graph test Gravity` -Y si todo va bien deberías ser recibido con lo siguiente: +And if all goes well you should be greeted with the following: -![Matchstick diciendo "¡Todas las pruebas superadas!"](/img/matchstick-tests-passed.png) +![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) -## Escenarios de prueba comunes +## Common test scenarios -### Hidratar la tienda con un cierto estado +### Hydrating the store with a certain state -Los usuarios pueden hidratar la tienda con un conjunto conocido de entidades. Aquí hay un ejemplo para inicializar la tienda con una entidad Gravatar: +Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity: ```typescript let gravatar = new Gravatar('entryId') gravatar.save() ``` -### Llamada a una función de mapeo con un evento +### Calling a mapping function with an event -Un usuario puede crear un evento personalizado y pasarlo a una función de mapeo que está vinculada a la tienda: +A user can create a custom event and pass it to a mapping function that is bound to the store: ```typescript import { store } from 'matchstick-as/assembly/store' @@ -156,9 +156,9 @@ let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01 handleNewGravatar(newGravatarEvent) ``` -### Llamar a todos los mapeos con fixtures de eventos +### Calling all of the mappings with event fixtures -Los usuarios pueden llamar a los mapeos con fixtures de prueba. +Users can call the mappings with test fixtures. 
 ```typescript
 import { NewGravatar } from '../../generated/Gravity/Gravity'
@@ -180,9 +180,9 @@ export function handleNewGravatars(events: NewGravatar[]): void {
 }
 ```

-### Simular llamadas de contratos
+### Mocking contract calls

-Los usuarios pueden simular las llamadas de los contratos:
+Users can mock contract calls:

 ```typescript
 import { addMetadata, assert, createMockedFunction, clearStore, test } from 'matchstick-as/assembly/index'
@@ -202,9 +202,9 @@ let result = gravity.gravatarToOwner(bigIntParam)

 assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result))
 ```

-Como se ha demostrado, para simular (mock) una llamada a un contrato y endurecer un valor de retorno, el usuario debe proporcionar una dirección de contrato, el nombre de la función, la firma de la función, una array de argumentos y, por supuesto, el valor de retorno.
+As demonstrated, in order to mock a contract call and hardcode a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value.

-Los usuarios también pueden simular las reversiones de funciones:
+Users can also mock function reverts:

 ```typescript
 let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
 createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(string, string)')
   .reverts()
 ```

-### Afirmar el estado del almacén
+### Asserting the state of the store

-Los usuarios pueden hacer una aserción al estado final (o intermedio) del almacén a través de entidades de aserción. Para ello, el usuario tiene que suministrar un tipo de Entidad, el ID específico de una Entidad, el nombre de un campo en esa Entidad y el valor esperado del campo. Aquí hay un ejemplo rápido:
+Users are able to assert the final (or midway) state of the store through asserting entities. In order to do this, the user has to supply an Entity type, the specific ID of an Entity, a name of a field on that Entity, and the expected value of the field. Here's a quick example:

 ```typescript
 import { assert } from 'matchstick-as/assembly/index'
@@ -227,11 +227,11 @@ gravatar.save()

 assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0')
 ```

-Al ejecutar la función assert.fieldEquals() se comprobará la igualdad del campo dado con el valor esperado dado. La prueba fallará y se emitirá un mensaje de error si los valores son **NO** iguales. En caso contrario, la prueba pasará con éxito.
+Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully.

-### Interacción con los metadatos de los Eventos
+### Interacting with Event metadata

-Los usuarios pueden utilizar los metadatos de la transacción por defecto, que podrían ser devueltos como un ethereum.Event utilizando la función `newMockEvent()`. El siguiente ejemplo muestra cómo se puede leer/escribir en esos campos del objeto Evento:
+Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function. 
The following example shows how you can read/write to those fields on the Event object:

 ```typescript
 // Read
 let logType = newGravatarEvent.logType

 // Write
 let UPDATED_ADDRESS = '0xB16081F360e3847006dB660bae1c6d1b2e17eC2A'
 newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS)
 ```

-### Afirmar la igualdad de las variables
+### Asserting variable equality

 ```typescript
 assert.equals(ethereum.Value.fromString("hello"), ethereum.Value.fromString("hello"));
 ```

-### Afirmar que una Entidad es **no** en el almacén
+### Asserting that an Entity is **not** in the store

-Los usuarios pueden afirmar que una entidad no existe en el almacén. La función toma un tipo de entidad y un id. Si la entidad está de hecho en el almacén, la prueba fallará con un mensaje de error relevante. Aquí hay un ejemplo rápido de cómo utilizar esta funcionalidad:
+Users can assert that an entity does not exist in the store. The function takes an entity type and an id. If the entity is in fact in the store, the test will fail with a relevant error message. Here's a quick example of how to use this functionality:

 ```typescript
 assert.notInStore('Gravatar', '23')
 ```

-### Duración del tiempo de ejecución de la prueba en la salida del registro
+### Test run time duration in the log output

-La salida del registro incluye la duración de la prueba. Aquí hay un ejemplo:
+The log output includes the test run duration. Here's an example:

 `Jul 09 14:54:42.420 INFO Program execution time: 10.06022ms`

-## Comentarios
+## Feedback

-Si tienes alguna pregunta, comentario, petición de características o simplemente quieres ponerte en contacto, el mejor lugar sería The Graph Discord, donde tenemos un canal dedicado a Matchstick, llamado 🔥| unit-testing.
+If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing.

From 6238d9dca0e45965cfc6e15c1cadacef8d9cdbbf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:29 -0500
Subject: [PATCH 048/241] New translations query-the-graph.mdx (Japanese)

---
 pages/ja/developer/query-the-graph.mdx | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/pages/ja/developer/query-the-graph.mdx b/pages/ja/developer/query-the-graph.mdx
index 5be6824eaafa..ae480b1e6883 100644
--- a/pages/ja/developer/query-the-graph.mdx
+++ b/pages/ja/developer/query-the-graph.mdx
@@ -1,14 +1,14 @@
 ---
-title: グラフのクエリ
+title: Query The Graph
 ---

-サブグラフがデプロイされた状態で、[Graph Explorer](https://thegraph.com/explorer)にアクセスすると、[GraphiQL](https://github.com/graphql/graphiql)インターフェースが表示され、サブグラフにデプロイされた GraphQL API を探索して、クエリを発行したり、スキーマを表示したりすることができます。
+With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema.

-以下に例を示しますが、サブグラフのエンティティへのクエリの方法については、[Query API](/developer/graphql-api)を参照してください。
+An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities.

-#### 例
+#### Example

-このクエリは、マッピングが作成したすべてのカウンターを一覧表示します。 作成するのは 1 つだけなので、結果には 1 つの`デフォルトカウンター
+This query lists all the counters our mapping has created. 
Since we only create one, the result will only contain our one `default-counter`: ```graphql { @@ -19,14 +19,14 @@ title: グラフのクエリ } ``` -## グラフエクスプローラの利用 +## Using The Graph Explorer -分散型グラフエクスプローラに公開されているサブグラフには、それぞれ固有のクエリ URL が設定されており、サブグラフの詳細ページに移動し、右上の「クエリ」ボタンをクリックすることで確認できます。 これは、サブグラフの詳細ページに移動し、右上の「クエリ」ボタンをクリックすると、サブグラフの固有のクエリ URL と、そのクエリの方法を示すサイドペインが表示されます。 +Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. ![Query Subgraph Pane](/img/query-subgraph-pane.png) -お気づきのように、このクエリ URL には固有の API キーを使用する必要があります。 API キーの作成と管理は、[Subgraph Studio](https://thegraph.com/studio)の「API Keys」セクションで行うことができます。 Subgraph Studio の使用方法については、[こちら](/studio/subgraph-studio)をご覧ください。 +As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). -API キーを使用してサブグラフをクエリすると、GRT で支払われるクエリ料金が発生します。 課金については[こちら](/studio/billing)をご覧ください。 +Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). -また、「プレイグラウンド」タブの GraphQL プレイグラウンドを使用して、The Graph Explorer 内のサブグラフに問い合わせを行うことができます。 +You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. From 35cd593184304b4ffc18c5a781b54f618b356309 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:31 -0500 Subject: [PATCH 049/241] New translations publish-subgraph.mdx (Spanish) --- pages/es/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/es/developer/publish-subgraph.mdx b/pages/es/developer/publish-subgraph.mdx index 2d0a971c4286..2f35f5eb1bae 100644 --- a/pages/es/developer/publish-subgraph.mdx +++ b/pages/es/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: Publicar un Subgrafo en la Red Descentralizada +title: Publish a Subgraph to the Decentralized Network --- -Una vez que tu subgrafo ha sido [desplegado en el Subgraph Studio](/studio/deploy-subgraph-studio), lo has probado y estás listo para ponerlo en producción, puedes publicarlo en la red descentralizada. +Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. -La publicación de un Subgrafo en la red descentralizada hace que esté disponible para que los [curadores](/curating) comiencen a curar en él, y para que los [indexadores](/indexing) comiencen a indexarlo. +Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. -Para ver un tutorial sobre cómo publicar un subgrafo en la red descentralizada, consulta [este video](https://youtu.be/HfDgC2oNnwo?t=580). +For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). -### Redes +### Networks -La red descentralizada admite actualmente tanto Rinkeby como Ethereum Mainnet. 
+The decentralized network currently supports both Rinkeby and Ethereum Mainnet. -### Publicar un subgrafo +### Publishing a subgraph -Los subgrafos se pueden publicar en la red descentralizada directamente desde el panel de control de Subgraph Studio haciendo clic en el botón **Publish**. Una vez publicado un subgrafo, estará disponible para su visualización en The [Graph Explorer](https://thegraph.com/explorer/). +Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). -- Los subgrafos publicados en Rinkeby pueden indexar y consultar datos de la red Rinkeby o de la red principal de Ethereum. +- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. -- Los subgrafos publicados en la red principal (mainnet) de Ethereum sólo pueden indexar y consultar datos de la red principal de Ethereum, lo que significa que no se pueden publicar subgrafos en la red descentralizada principal que indexen y consulten datos de la red de prueba (testnet). +- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. -- Cuando se publica una nueva versión para un subgrafo existente se aplican las mismas reglas que las anteriores. +- When publishing a new version for an existing subgraph the same rules apply as above. -### Actualización de los metadatos de un subgrafo publicado +### Updating metadata for a published subgraph -Una vez que tu subgrafo ha sido publicado en la red descentralizada, puedes modificar los metadatos en cualquier momento haciendo la actualización en el panel de control de Subgraph Studio del subgrafo. Luego de guardar los cambios y publicar tus actualizaciones en la red, éstas se reflejarán en The Graph Explorer. Esto no creará una nueva versión, ya que tu despliegue no ha cambiado. +Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. From ee4bceac1822d4122834fdc181c346407e3f6869 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:32 -0500 Subject: [PATCH 050/241] New translations publish-subgraph.mdx (Arabic) --- pages/ar/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ar/developer/publish-subgraph.mdx b/pages/ar/developer/publish-subgraph.mdx index 3d51eccafeed..2f35f5eb1bae 100644 --- a/pages/ar/developer/publish-subgraph.mdx +++ b/pages/ar/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: نشر Subgraph للشبكة اللامركزية +title: Publish a Subgraph to the Decentralized Network --- -بمجرد أن الـ subgraph الخاص بك [قد تم نشره لـ Subgraph Studio](/studio/deploy-subgraph-studio) ، وقمت باختباره ، وأصبحت جاهزا لوضعه في الإنتاج ، يمكنك بعد ذلك نشره للشبكة اللامركزية. 
+Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. -يؤدي نشر Subgraph على الشبكة اللامركزية إلى الإتاحة [ للمنسقين ](/curating) لبدء التنسيق، و [ للمفهرسين](/indexing) لبدء الفهرسة. +Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. -للحصول على إرشادات حول كيفية نشر subgraph على الشبكة اللامركزية ، راجع [ هذا الفيديو ](https://youtu.be/HfDgC2oNnwo؟t=580). +For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). -### الشبكات +### Networks -تدعم الشبكة اللامركزية حاليا كلا من Rinkeby و Ethereum Mainnet. +The decentralized network currently supports both Rinkeby and Ethereum Mainnet. -### نشر subgraph +### Publishing a subgraph -يمكن نشر الـ Subgraphs على الشبكة اللامركزية مباشرة من Subgraph Studio dashboard بالنقر فوق الزر ** Publish **. بمجرد نشر الـ subgraph ، فإنه سيكون متاحا للعرض في [ Graph Explorer ](https://thegraph.com/explorer/). +Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). -- يمكن لـ Subgraphs المنشور على Rinkeby فهرسة البيانات والاستعلام عنها من شبكة Rinkeby أو Ethereum Mainnet. +- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. -- يمكن لـ Subgraphs المنشور على Ethereum Mainnet فقط فهرسة البيانات والاستعلام عنها من Ethereum Mainnet ، مما يعني أنه لا يمكنك نشر الـ subgraphs على الشبكة اللامركزية الرئيسية التي تقوم بفهرسة بيانات testnet والاستعلام عنها. +- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. -- عند نشر نسخة جديدة لـ subgraph حالي ، تنطبق عليه نفس القواعد أعلاه. +- When publishing a new version for an existing subgraph the same rules apply as above. -### تحديث بيانات الـ subgraph المنشور +### Updating metadata for a published subgraph -بمجرد نشر الـ subgraph الخاص بك على الشبكة اللامركزية ، يمكنك تعديل البيانات الوصفية في أي وقت عن طريق إجراء التحديث في Subgraph Studio dashboard لـ subgraph. بعد حفظ التغييرات ونشر تحديثاتك على الشبكة ، ستنعكس في the Graph Explorer. لن يؤدي هذا إلى إنشاء إصدار جديد ، لأن النشر الخاص بك لم يتغير. +Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. 
From 6247598fe9e56f97239661b92c8214bde01fc8bb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:33 -0500 Subject: [PATCH 051/241] New translations publish-subgraph.mdx (Japanese) --- pages/ja/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ja/developer/publish-subgraph.mdx b/pages/ja/developer/publish-subgraph.mdx index e2458c5412d8..2f35f5eb1bae 100644 --- a/pages/ja/developer/publish-subgraph.mdx +++ b/pages/ja/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: 分散型ネットワークへのサブグラフの公開 +title: Publish a Subgraph to the Decentralized Network --- -サブグラフが [Subgraph Studioにデプロイ](/studio/deploy-subgraph-studio)され、それをテストし、本番の準備ができたら、分散型ネットワークにパブリッシュすることができます。 +Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. -サブグラフを分散型ネットワークに公開すると、[キュレーター](/curating)がキュレーションを開始したり、[インデクサー](/indexing)がインデックスを作成したりできるようになります。 +Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. -分散型ネットワークにサブグラフを公開する方法については、[こちらのビデオ](https://youtu.be/HfDgC2oNnwo?t=580)をご覧ください。 +For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). -### ネットワーク +### Networks -分散型ネットワークは現在、RinkebyとEthereum Mainnetの両方をサポートしています。 +The decentralized network currently supports both Rinkeby and Ethereum Mainnet. -### サブグラフの公開 +### Publishing a subgraph -サブグラフは、Subgraph Studioのダッシュボードから**Publish** ボタンをクリックすることで、直接分散型ネットワークに公開することができます。 サブグラフが公開されると、[Graph Explorer](https://thegraph.com/explorer/)で閲覧できるようになります。 +Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). -- Rinkebyに公開されたサブグラフは、RinkebyネットワークまたはEthereum Mainnetのいずれかからデータをインデックス化してクエリすることができます。 +- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. -- Ethereum Mainnetに公開されたサブグラフは、Ethereum Mainnetのデータのみをインデックス化してクエリすることができます。つまり、テストネットのデータをインデックス化して照会するサブグラフをメインの分散型ネットワークに公開することはできません。 +- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. -- 既存のサブグラフの新バージョンを公開する場合は、上記と同じルールが適用されます。 +- When publishing a new version for an existing subgraph the same rules apply as above. -### 公開されたサブグラフのメタデータの更新 +### Updating metadata for a published subgraph -サブグラフが分散型ネットワークに公開されると、サブグラフのSubgraph Studioダッシュボードで更新を行うことにより、いつでもメタデータを変更することができます。 変更を保存し、更新内容をネットワークに公開すると、グラフエクスプローラーに反映されます。 デプロイメントが変更されていないため、新しいバージョンは作成されません。 +Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. 
From 065243ac7596010eb9f53e9f38804e443d7a502a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:36 -0500 Subject: [PATCH 052/241] New translations query-the-graph.mdx (Spanish) --- pages/es/developer/query-the-graph.mdx | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/es/developer/query-the-graph.mdx b/pages/es/developer/query-the-graph.mdx index f21700f082b8..ae480b1e6883 100644 --- a/pages/es/developer/query-the-graph.mdx +++ b/pages/es/developer/query-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Consultar The Graph +title: Query The Graph --- -Con el subgrafo desplegado, visita el [Graph Explorer](https://thegraph.com/explorer) para abrir una [interfaz GraphQL](https://github.com/graphql/graphiql) en la que podrás explorar la API GraphQL desplegada para el subgrafo emitiendo consultas y viendo el esquema. +With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -A continuación se proporciona un ejemplo, pero por favor, consulta la [Query API](/developer/graphql-api) para obtener una referencia completa sobre cómo consultar las entidades del subgrafo. +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. -#### Ejemplo +#### Example -Estas listas de consultas muestran todos los contadores que nuestro mapeo ha creado. Como sólo creamos uno, el resultado sólo contendrá nuestro único `default-counter`: +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { @@ -19,14 +19,14 @@ Estas listas de consultas muestran todos los contadores que nuestro mapeo ha cre } ``` -## Uso de The Graph Explorer +## Using The Graph Explorer -Cada subgrafo publicado en The Graph Explorer descentralizado tiene una URL de consulta única que puedes encontrar navegando a la página de detalles del subgrafo y haciendo clic en el botón "Query (Consulta)" en la esquina superior derecha. Esto abrirá un panel lateral que te dará la URL de consulta única del subgrafo, así como algunas instrucciones sobre cómo consultarlo. +Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. -![Panel de Consulta de Subgrafos](/img/query-subgraph-pane.png) +![Query Subgraph Pane](/img/query-subgraph-pane.png) -Como puede observar, esta URL de consulta debe utilizar una clave de API única. Puedes crear y gestionar tus claves API en el [Subgraph Studio](https://thegraph.com/studio) en la sección "API Keys (Claves API)". Aprende a utilizar Subgraph Studio [aquí](/studio/subgraph-studio). +As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). -La consulta de subgrafos utilizando tus claves API generará tasas de consulta que se pagarán en GRT. 
Puedes obtener más información sobre la facturación [aquí](/studio/billing). +Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). -También puedes utilizar el playground GraphQL en la pestaña "Playground" para consultar un subgrafo dentro de The Graph Explorer. +You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. From 3f03bb79f89eb79895c8a1888b2d83049583f72c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:37 -0500 Subject: [PATCH 053/241] New translations query-the-graph.mdx (Arabic) --- pages/ar/developer/query-the-graph.mdx | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/query-the-graph.mdx b/pages/ar/developer/query-the-graph.mdx index 776fbcb6bed1..ae480b1e6883 100644 --- a/pages/ar/developer/query-the-graph.mdx +++ b/pages/ar/developer/query-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: الاستعلام عن The Graph +title: Query The Graph --- -بالـ subgraph المنشور ، قم بزيارة [ Graph Explorer ](https://thegraph.com/explorer) لفتح واجهة [ GraphiQL ](https://github.com/graphql/graphiql) حيث يمكنك استكشاف GraphQL API المنشورة لـ subgraph عن طريق إصدار الاستعلامات وعرض المخطط. +With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -تم توفير المثال أدناه ، ولكن يرجى الاطلاع على [Query API](/developer/graphql-api) للحصول على مرجع كامل حول كيفية الاستعلام عن كيانات الـ subgraph. +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. -#### مثال +#### Example -يسرد هذا الاستعلام جميع العدادات التي أنشأها الـ mapping الخاص بنا. نظرا لأننا أنشأنا واحدا فقط ، فستحتوي النتيجة فقط على `default-counter`: +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { @@ -19,14 +19,14 @@ title: الاستعلام عن The Graph } ``` -## استخدام The Graph Explorer +## Using The Graph Explorer -يحتوي كل subgraph منشور على Graph Explorer اللامركزي على عنوان URL فريد للاستعلام والذي يمكنك العثور عليه بالانتقال إلى صفحة تفاصيل الـ subgraph والنقر على "Query" في الزاوية اليمنى العليا. سيؤدي هذا إلى فتح نافذة جانبية والتي تمنحك عنوان URL فريد للاستعلام لـ subgraph بالإضافة إلى بعض الإرشادات حول كيفية الاستعلام عنه. +Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. -![نافذة الاستعلام عن Subgraph](/img/query-subgraph-pane.png) +![Query Subgraph Pane](/img/query-subgraph-pane.png) -كما يمكنك أن تلاحظ ، أنه يجب أن يستخدم عنوان الاستعلام URL مفتاح API فريد. يمكنك إنشاء وإدارة مفاتيح API الخاصة بك في [ Subgraph Studio ](https://thegraph.com/studio) في قسم "API Keys". تعرف على المزيد حول كيفية استخدام Subgraph Studio [ هنا ](/studio/subgraph-studio). +As you can notice, this query URL must use a unique API key. 
You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio).

-سيؤدي الاستعلام عن الـ subgraphs باستخدام مفاتيح API إلى إنشاء رسوم الاستعلام التي سيتم دفعها كـ GRT. يمكنك معرفة المزيد حول الفوترة [ هنا ](/studio/billing).
+Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing).

-يمكنك أيضا استخدام GraphQL playground في علامة التبويب "Playground" للاستعلام عن subgraph داخل The Graph Explorer.
+You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer.

From d078359f1be19c612548680d79f00cfe74af9f1e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:38 -0500
Subject: [PATCH 054/241] New translations migrating-subgraph.mdx (Korean)

---
 pages/ko/hosted-service/migrating-subgraph.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pages/ko/hosted-service/migrating-subgraph.mdx b/pages/ko/hosted-service/migrating-subgraph.mdx
index 260f084c0e7d..85f72f053b30 100644
--- a/pages/ko/hosted-service/migrating-subgraph.mdx
+++ b/pages/ko/hosted-service/migrating-subgraph.mdx
@@ -2,7 +2,7 @@
 title: Migrating an Existing Subgraph to The Graph Network
 ---

-## 소개
+## Introduction

 This is a guide for the migration of subgraphs from the Hosted Service to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Pickle, and BadgerDAO, all of which rely on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data.

@@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o

From 5878b14f497893a1c40e717bac9c997a3250e228 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:39 -0500
Subject: [PATCH 055/241] New translations migrating-subgraph.mdx (Chinese Simplified)

---
 pages/zh/hosted-service/migrating-subgraph.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/zh/hosted-service/migrating-subgraph.mdx b/pages/zh/hosted-service/migrating-subgraph.mdx
index 979d684faeed..85f72f053b30 100644
--- a/pages/zh/hosted-service/migrating-subgraph.mdx
+++ b/pages/zh/hosted-service/migrating-subgraph.mdx
@@ -2,7 +2,7 @@
 title: Migrating an Existing Subgraph to The Graph Network
 ---

-## 介绍
+## Introduction

 This is a guide for the migration of subgraphs from the Hosted Service to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Pickle, and BadgerDAO, all of which rely on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. 
From e2dee83e7ccff966471570725188a482e58704c8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:44 -0500 Subject: [PATCH 056/241] New translations studio-faq.mdx (Spanish) --- pages/es/studio/studio-faq.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/es/studio/studio-faq.mdx b/pages/es/studio/studio-faq.mdx index 8ed8de7d106c..4db4d7ccddaa 100644 --- a/pages/es/studio/studio-faq.mdx +++ b/pages/es/studio/studio-faq.mdx @@ -1,21 +1,21 @@ --- -title: Preguntas Frecuentes sobre Subgraph Studio +title: Subgraph Studio FAQs --- -### 1. ¿Cómo puedo crear una clave API? +### 1. How do I create an API Key? -En Subgraph Studio, puedes crear las claves de la API que necesites y añadir configuraciones de seguridad a cada una de ellas. +In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. -### 2. ¿Puedo crear varias claves API? +### 2. Can I create multiple API Keys? -R: ¡Sí! Puedes crear varias claves API para utilizarlas en diferentes proyectos. Consulta el enlace [aquí](https://thegraph.com/studio/apikeys/). +A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). -### 3. ¿Cómo puedo restringir un dominio para una clave API? +### 3. How do I restrict a domain for an API Key? -Después de crear una Clave de API, en la sección Seguridad puedes definir los dominios que pueden consultar una Clave de API específica. +After creating an API Key, in the Security section you can define the domains that can query a specific API Key. -### 4. ¿Cómo puedo encontrar las URL de consulta de los subgrafos si no soy el desarrollador del subgrafo que quiero utilizar? +### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? -Puedes encontrar la URL de consulta de cada subgrafo en la sección Detalles del Subgrafo de the Graph Explorer. Al hacer clic en el botón "Query", se te dirigirá a un panel en el que podrás ver la URL de consulta del subgrafo te interesa. A continuación, puedes sustituir el marcador de posición `` por la clave de la API que deseas aprovechar en el Subgraph Studio. +You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. -Recuerda que puedes crear una clave API y consultar cualquier subgrafo publicado en la red, incluso si tú mismo construyes un subgrafo. Estas consultas a través de la nueva clave API, son consultas pagas como cualquier otra en la red. +Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. 
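For illustration only, queries against such a URL can be issued over plain HTTP. The sketch below assumes a gateway-style query URL and uses placeholder values for the API key and subgraph ID; in practice, copy the exact query URL from the subgraph's details page rather than constructing it by hand:

```bash
# Rough sketch: POST a GraphQL query to a subgraph's query URL.
# API_KEY and SUBGRAPH_ID are placeholders, and the URL shape shown here
# should be checked against the one displayed for your subgraph.
API_KEY="<api_key>"
SUBGRAPH_ID="<subgraph_id>"

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ _meta { block { number } } }"}' \
  "https://gateway.thegraph.com/api/$API_KEY/subgraphs/id/$SUBGRAPH_ID"
```

Any GraphQL client can be pointed at the same endpoint; because the API key is embedded in the URL itself, the domain restrictions described above are a useful way to limit where the key can be used.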
From 6a4c619a3efdfacbb280663ccb42981e475cf417 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:45 -0500 Subject: [PATCH 057/241] New translations studio-faq.mdx (Arabic) --- pages/ar/studio/studio-faq.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/pages/ar/studio/studio-faq.mdx b/pages/ar/studio/studio-faq.mdx index 20b2ffb13a5e..4db4d7ccddaa 100644 --- a/pages/ar/studio/studio-faq.mdx +++ b/pages/ar/studio/studio-faq.mdx @@ -1,14 +1,14 @@ --- -title: الأسئلة الشائعة حول Subgraph Studio +title: Subgraph Studio FAQs --- -### 1. كيف يمكنني إنشاء مفتاح API؟ +### 1. How do I create an API Key? -في Subgraph Studio ، يمكنك إنشاء API Keys وذلك حسب الحاجة وإضافة إعدادات الأمان لكل منها. +In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. -### 2. هل يمكنني إنشاء أكثر من API Keys؟ +### 2. Can I create multiple API Keys? -A: نعم يمكنك إنشاء أكثر من API Keys وذلك لاستخدامها في مشاريع مختلفة. تحقق من الرابط [هنا](https://thegraph.com/studio/apikeys/). +A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). ### 3. How do I restrict a domain for an API Key? From ab408f4707436670a152ade7d6db0322555f9b72 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:47 -0500 Subject: [PATCH 058/241] New translations studio-faq.mdx (Chinese Simplified) --- pages/zh/studio/studio-faq.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/zh/studio/studio-faq.mdx b/pages/zh/studio/studio-faq.mdx index b5f894110682..4db4d7ccddaa 100644 --- a/pages/zh/studio/studio-faq.mdx +++ b/pages/zh/studio/studio-faq.mdx @@ -1,21 +1,21 @@ --- -title: 子图工作室常见问题 +title: Subgraph Studio FAQs --- -### 1. 我如何创建一个 API 密钥? +### 1. How do I create an API Key? -在 Subgraph Studio 中,你可以根据需要创建 API 密钥,并为每个密钥添加安全设置。 +In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. -### 2. 我可以创建多个 API 密钥吗? +### 2. Can I create multiple API Keys? -是的,可以。 你可以创建多个 API 密钥,在不同的项目中使用。 点击 [这里](https://thegraph.com/studio/apikeys/)查看。 +A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). -### 3. 我如何为 API 密钥限制一个域名? +### 3. How do I restrict a domain for an API Key? -创建了 API 密钥后,在安全部分,你可以定义可以查询特定 API 密钥的域。 +After creating an API Key, in the Security section you can define the domains that can query a specific API Key. -### 4. 如果我不是我想使用的子图的开发者,我怎样才能找到子图的查询 URL? +### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? -你可以在 The Graph Explorer 的 Subgraph Details 部分找到每个子图的查询 URL。 当你点击 "查询 "按钮时,你将被引导到一个窗格,在这里你可以查看你感兴趣的子图的查询 URL。 然后你可以把 `api_key` 占位符替换成你想在 Subgraph Studio 中利用的 API 密钥。 +You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. -请记住,你可以创建一个 API 密钥并查询发布到网络上的任何子图,即使你自己建立了一个子图。 这些通过新的 API 密钥进行的查询,与网络上的任何其他查询一样,都是付费查询。 +Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. 
These queries via the new API key are paid queries, like any other on the network.

From 48d3e2319be0cca8ec2ca621ba06e0b106de36d6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:48 -0500
Subject: [PATCH 059/241] New translations subgraph-studio.mdx (Spanish)

---
 pages/es/studio/subgraph-studio.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pages/es/studio/subgraph-studio.mdx b/pages/es/studio/subgraph-studio.mdx
index 28cfadea4edc..9af3926db3df 100644
--- a/pages/es/studio/subgraph-studio.mdx
+++ b/pages/es/studio/subgraph-studio.mdx
@@ -36,7 +36,7 @@ The best part! When you first create a subgraph, you’ll be directed to fill ou

 - Your Subgraph Name
 - Image
-- Descripción
+- Description
 - Categories
 - Website

@@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF

From 96ab0f083307030c25eaba89eee3e68a13b57f88 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:49 -0500
Subject: [PATCH 060/241] New translations multisig.mdx (Spanish)

---
 pages/es/studio/multisig.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/es/studio/multisig.mdx b/pages/es/studio/multisig.mdx
index 7b0f55c22ffb..164835bdb8a4 100644
--- a/pages/es/studio/multisig.mdx
+++ b/pages/es/studio/multisig.mdx
@@ -4,7 +4,7 @@ title: Using a Multisig Wallet

 Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions.

-### Crear un Subgrafo
+### Create a Subgraph

 Similarly to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code URL if applicable.

From 31f900213bfaf3db948d6399400d141b410f7337 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:55:50 -0500
Subject: [PATCH 061/241] New translations subgraph-studio.mdx (Arabic)

---
 pages/ar/studio/subgraph-studio.mdx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pages/ar/studio/subgraph-studio.mdx b/pages/ar/studio/subgraph-studio.mdx
index d4e82eeef02e..9af3926db3df 100644
--- a/pages/ar/studio/subgraph-studio.mdx
+++ b/pages/ar/studio/subgraph-studio.mdx
@@ -36,7 +36,7 @@ The best part! When you first create a subgraph, you’ll be directed to fill ou

 - Your Subgraph Name
 - Image
-- الوصف
+- Description
 - Categories
 - Website

@@ -47,7 +47,7 @@ The Graph Network is not yet able to support all of the data-sources & features

 - Index mainnet Ethereum
 - Must not use any of the following features:
   - ipfs.cat & ipfs.map
-  - أخطاء غير فادحة
+  - Non-fatal errors
   - Grafting

 More features & networks will be added to The Graph Network incrementally.

@@ -70,7 +70,7 @@ You’ve made it this far - congrats! 
Publishing your subgraph means that an IPF From 002878af6dbb604f5babe26cda4ad2af62585516 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:51 -0500 Subject: [PATCH 062/241] New translations subgraph-studio.mdx (Japanese) --- pages/ja/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/subgraph-studio.mdx b/pages/ja/studio/subgraph-studio.mdx index 1f5ecf6a7011..9af3926db3df 100644 --- a/pages/ja/studio/subgraph-studio.mdx +++ b/pages/ja/studio/subgraph-studio.mdx @@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF From 7e9866577a5d9ae75e59df6390cca88f2ffe86e5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:52 -0500 Subject: [PATCH 063/241] New translations subgraph-studio.mdx (Korean) --- pages/ko/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/studio/subgraph-studio.mdx b/pages/ko/studio/subgraph-studio.mdx index 562d588ef26d..9af3926db3df 100644 --- a/pages/ko/studio/subgraph-studio.mdx +++ b/pages/ko/studio/subgraph-studio.mdx @@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF From 05e195f7f4e69b9a7a4b215021c9a6f6fa77be52 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:54 -0500 Subject: [PATCH 064/241] New translations near.mdx (Spanish) --- pages/es/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/supported-networks/near.mdx b/pages/es/supported-networks/near.mdx index f86cb2b89c0d..288ac380494c 100644 --- a/pages/es/supported-networks/near.mdx +++ b/pages/es/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## Preguntas frecuentes +## FAQ ### How does the beta work? From 761b1db1b424485448b41d6917ac71619e84f92f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:55 -0500 Subject: [PATCH 065/241] New translations near.mdx (Arabic) --- pages/ar/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/supported-networks/near.mdx b/pages/ar/supported-networks/near.mdx index c364fd4ecf89..288ac380494c 100644 --- a/pages/ar/supported-networks/near.mdx +++ b/pages/ar/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## الأسئلة الشائعة +## FAQ ### How does the beta work? From e7f5ab5a7281b1b4806479f8b2500d124c0ff4c4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:56 -0500 Subject: [PATCH 066/241] New translations near.mdx (Japanese) --- pages/ja/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/supported-networks/near.mdx b/pages/ja/supported-networks/near.mdx index 0965bdee1675..288ac380494c 100644 --- a/pages/ja/supported-networks/near.mdx +++ b/pages/ja/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## よくある質問 +## FAQ ### How does the beta work? 
From 3ca823de5fd5b7678103a027694fa1bd2de2cecc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:58 -0500 Subject: [PATCH 067/241] New translations near.mdx (Chinese Simplified) --- pages/zh/supported-networks/near.mdx | 54 ++++++++++++++-------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx index e5980fba4e95..288ac380494c 100644 --- a/pages/zh/supported-networks/near.mdx +++ b/pages/zh/supported-networks/near.mdx @@ -1,56 +1,56 @@ --- -title: 在 NEAR 上构建子图 +title: Building Subgraphs on NEAR --- -> Graph节点和托管服务中对NEAR 的支持目前处于测试阶段:任何有关构建 NEAR 子图的任何问题,请联系 near@thegraph.com! +> NEAR support in Graph Node and on the Hosted Service is in beta: please contact near@thegraph.com with any questions about building NEAR subgraphs! -本指南介绍了如何在[NEAR区块链](https://docs.near.org/)上构建索引智能合约的子图。 +This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## NEAR是什么? +## What is NEAR? -[NEAR](https://near.org/) 是一个用于构建去中心化应用程序的智能合约平台。 请访问 [官方文档](https://docs.near.org/docs/concepts/new-to-near) 了解更多信息。 +[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. -## NEAR子图是什么? +## What are NEAR subgraphs? -Graph 为开发人员提供了一种被称为子图的工具,利用这个工具,开发人员能够处理区块链事件,并通过 GraphQL API提供结果数据。 [Graph节点](https://github.com/graphprotocol/graph-node)现在能够处理 NEAR 事件,这意味着 NEAR 开发人员现在可以构建子图来索引他们的智能合约。 +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. -子图是基于事件的,这意味着子图可以侦听并处理链上事件。 NEAR 子图目前支持两种类型的处理程序: +Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs: -- 区块处理器: 这些处理程序在每个新区块上运行 -- 收据处理器: 每次在指定帐户上一个消息被执行时运行。 +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account -[NEAR 文档中](https://docs.near.org/docs/concepts/transaction#receipt): +[From the NEAR documentation](https://docs.near.org/docs/concepts/transaction#receipt): -> Receipt是系统中唯一可操作的对象。 当我们在 NEAR 平台上谈论“处理交易”时,这最终意味着在某个时候“应用收据”。 +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. -## 构建NEAR子图 +## Building a NEAR Subgraph -`@graphprotocol/graph-cli`是一个用于构建和部署子图的命令行工具。 +`@graphprotocol/graph-cli` is a command line tool for building and deploying subgraphs. -`@graphprotocol/graph-ts` 是子图特定类型的库。 +`@graphprotocol/graph-ts` is a library of subgraph-specific types. -NEAR子图开发需要`0.23.0`以上版本的`graph-cli`,以及 `0.23.0`以上版本的`graph-ts`。 +NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> 构建 NEAR 子图与构建索引以太坊的子图非常相似。 +> Building a NEAR subgraph is very similar to building a subgraph which indexes Ethereum. 
-子图定义包括三个方面: +There are three aspects of subgraph definition: -**subgraph.yaml:** 子图清单,定义感兴趣的数据源以及如何处理它们。 NEAR 是一种全新`类型`数据源。 +**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** 一个模式文件,它定义为您的子图存储哪些数据,以及如何通过 GraphQL 查询它。 NEAR 子图的要求包含在 [现有文档](/developer/create-subgraph-hosted#the-graphql-schema)中。 +**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). -**AssemblyScript 映射:**将事件数据转换为模式文件中定义的实体的[AssemblyScript 代码](/developer/assemblyscript-api)。 NEAR 支持引入了 NEAR 特定的数据类型和新的JSON 解析功能。 +**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. -在子图开发过程中,有两个关键命令: +During subgraph development there are two key commands: ```bash -$ graph codegen # 从清单中标识的模式文件生成类型 -$ graph build # 从 AssemblyScript 文件生成 Web Assembly,并在 /build 文件夹中准备所有子图文件 +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder ``` -### 子图清单定义 +### Subgraph Manifest Definition -子图清单(`subgraph.yaml`)标识子图的数据源、感兴趣的触发器以及响应这些触发器而运行的函数。 以下是一个NEAR 的子图清单的例子: +The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:: ```yaml specVersion: 0.0.2 @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## 常见问题 +## FAQ ### How does the beta work? From 998476483150f39f9b1347365b6f20812fda2ed1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:55:59 -0500 Subject: [PATCH 068/241] New translations multisig.mdx (Arabic) --- pages/ar/studio/multisig.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/studio/multisig.mdx b/pages/ar/studio/multisig.mdx index 555ba11f9da9..164835bdb8a4 100644 --- a/pages/ar/studio/multisig.mdx +++ b/pages/ar/studio/multisig.mdx @@ -4,7 +4,7 @@ title: Using a Multisig Wallet Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. -### إنشاء الـ Subgraph +### Create a Subgraph Similary to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. 
From 20ad069ef1d504d05d9bbb6c016ae04a1b068719 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:02 -0500 Subject: [PATCH 069/241] New translations what-is-hosted-service.mdx (Chinese Simplified) --- pages/zh/hosted-service/what-is-hosted-service.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/zh/hosted-service/what-is-hosted-service.mdx b/pages/zh/hosted-service/what-is-hosted-service.mdx index 24d7068c1b44..7f604c8dc31a 100644 --- a/pages/zh/hosted-service/what-is-hosted-service.mdx +++ b/pages/zh/hosted-service/what-is-hosted-service.mdx @@ -1,8 +1,8 @@ --- -title: 什么是托管服务? +title: What is the Hosted Service? --- -本节将引导您将子图部署到 [托管服务](https://thegraph.com/hosted-service/) 提醒一下,托管服务不会很快关闭。 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. @@ -42,9 +42,9 @@ graph init --from-example --product hosted-service / The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. -## 托管服务支持的网络 +## Supported Networks on the Hosted Service -请注意托管服务支持以下网络。 [Graph Explorer](https://thegraph.com/explorer)目前不支持以太坊主网(“主网”)之外的网络。 +Please note that the following networks are supported on the Hosted Service. 
Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) - `mainnet` - `kovan` From 671100b7a9cb761b670921c3d9bb21d538d0d5f9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:03 -0500 Subject: [PATCH 070/241] New translations query-hosted-service.mdx (Spanish) --- .../es/hosted-service/query-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/es/hosted-service/query-hosted-service.mdx b/pages/es/hosted-service/query-hosted-service.mdx index cdb6bf9f8135..731e3a3120b2 100644 --- a/pages/es/hosted-service/query-hosted-service.mdx +++ b/pages/es/hosted-service/query-hosted-service.mdx @@ -1,14 +1,14 @@ --- -title: Consultas en el Sistema Alojado +title: Query the Hosted Service --- -Con el subgrafo desplegado, visita el [Servicio alojado](https://thegraph.com/hosted-service/) para abrir una interfaz [GraphiQL](https://github.com/graphql/graphiql) donde puedes explorar la API GraphQL desplegada para el subgrafo emitiendo consultas y viendo el esquema. +With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -A continuación se proporciona un ejemplo, pero por favor, consulta la [Query API](/developer/graphql-api) para obtener una referencia completa sobre cómo consultar las entidades del subgrafo. +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. -#### Ejemplo +#### Example -Estas listas de consultas muestran todos los contadores que nuestro mapeo ha creado. Como sólo creamos uno, el resultado sólo contendrá nuestro único `default-counter`: +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { @@ -19,10 +19,10 @@ Estas listas de consultas muestran todos los contadores que nuestro mapeo ha cre } ``` -## Utilización del Servicio Alojado +## Using The Hosted Service -The Graph Explorer y su playground GraphQL es una forma útil de explorar y consultar los subgrafos desplegados en el Servicio Alojado. +The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service. 
-A continuación se detallan algunas de las principales características: +Some of the main features are detailed below: -![Explora el Playground](/img/explorer-playground.png) +![Explorer Playground](/img/explorer-playground.png) From cbd90c230efa9cde09b61c1df05cf03bdda6b286 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:04 -0500 Subject: [PATCH 071/241] New translations query-hosted-service.mdx (Arabic) --- pages/ar/hosted-service/query-hosted-service.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/hosted-service/query-hosted-service.mdx b/pages/ar/hosted-service/query-hosted-service.mdx index fd7de3b535a2..731e3a3120b2 100644 --- a/pages/ar/hosted-service/query-hosted-service.mdx +++ b/pages/ar/hosted-service/query-hosted-service.mdx @@ -4,11 +4,11 @@ title: Query the Hosted Service With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -تم توفير المثال أدناه ، ولكن يرجى الاطلاع على [Query API](/developer/graphql-api) للحصول على مرجع كامل حول كيفية الاستعلام عن كيانات الـ subgraph. +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. -#### مثال +#### Example -يسرد هذا الاستعلام جميع العدادات التي أنشأها الـ mapping الخاص بنا. نظرا لأننا أنشأنا واحدا فقط ، فستحتوي النتيجة فقط على `default-counter`: +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { From 19c0e4c4be63d0eae644cf5d5c568898467fb543 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:05 -0500 Subject: [PATCH 072/241] New translations query-hosted-service.mdx (Japanese) --- pages/ja/hosted-service/query-hosted-service.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/hosted-service/query-hosted-service.mdx b/pages/ja/hosted-service/query-hosted-service.mdx index 0fe2dbf03bb0..731e3a3120b2 100644 --- a/pages/ja/hosted-service/query-hosted-service.mdx +++ b/pages/ja/hosted-service/query-hosted-service.mdx @@ -4,11 +4,11 @@ title: Query the Hosted Service With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -以下に例を示しますが、サブグラフのエンティティへのクエリの方法については、[Query API](/developer/graphql-api)を参照してください。 +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. #### Example -このクエリは、マッピングが作成したすべてのカウンターを一覧表示します。 作成するのは 1 つだけなので、結果には 1 つの`デフォルトカウンター +This query lists all the counters our mapping has created. 
Since we only create one, the result will only contain our one `default-counter`: ```graphql { From e76a7acaa2f0e41361418399e1834c48debac60d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:07 -0500 Subject: [PATCH 073/241] New translations query-hosted-service.mdx (Chinese Simplified) --- .../zh/hosted-service/query-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/hosted-service/query-hosted-service.mdx b/pages/zh/hosted-service/query-hosted-service.mdx index ad41c4bede90..731e3a3120b2 100644 --- a/pages/zh/hosted-service/query-hosted-service.mdx +++ b/pages/zh/hosted-service/query-hosted-service.mdx @@ -1,14 +1,14 @@ --- -title: 查询托管服务 +title: Query the Hosted Service --- -部署子图后,请访问[托管服务](https://thegraph.com/hosted-service/) 以打开 [GraphiQL](https://github.com/graphql/graphiql) 界面,您可以在其中通过发出查询和查看数据模式来探索已经部署的子图的 GraphQL API。 +With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -下面提供了一个示例,但请参阅 [查询 API ](/developer/graphql-api) 以获取有关如何查询子图实体的完整参考。 +An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. -#### 示例 +#### Example -此查询列出了我们的映射创建的所有计数器。 由于我们只创建一个,结果将只包含我们的一个 `默认计数器`: +This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: ```graphql { @@ -19,10 +19,10 @@ title: 查询托管服务 } ``` -## 使用托管服务 +## Using The Hosted Service -Graph Explorer 及其 GraphQL playground是探索和查询托管服务上部署的子图的有用方式。 +The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service. -下面详细介绍了一些主要功能: +Some of the main features are detailed below: -![探索Playground](/img/explorer-playground.png) +![Explorer Playground](/img/explorer-playground.png) From 388965377ebfc012301ffd45b568403784d69bf9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:08 -0500 Subject: [PATCH 074/241] New translations what-is-hosted-service.mdx (Spanish) --- .../hosted-service/what-is-hosted-service.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/es/hosted-service/what-is-hosted-service.mdx b/pages/es/hosted-service/what-is-hosted-service.mdx index 03b41d6578b5..7f604c8dc31a 100644 --- a/pages/es/hosted-service/what-is-hosted-service.mdx +++ b/pages/es/hosted-service/what-is-hosted-service.mdx @@ -1,20 +1,20 @@ --- -title: '¿Qué es el Servicio Alojado?' +title: What is the Hosted Service? --- -Esta sección te guiará a través del despliegue de un subgrafo en el Servicio Alojado, también conocido como [Servicio Alojado.](https://thegraph.com/hosted-service/) Como recordatorio, el Servicio Alojado no se cerrará pronto. El Servicio Alojado desaparecerá gradualmente cuando alcancemos la paridad de características con la red descentralizada. Tus subgrafos desplegados en el Servicio Alojado siguen disponibles [aquí.](https://thegraph.com/hosted-service/) +This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. 
We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) -Si no tienes una cuenta en el Servicio Alojado, puedes registrarte con tu cuenta de Github. Una vez que te autentiques, puedes empezar a crear subgrafos a través de la interfaz de usuario y desplegarlos desde tu terminal. Graph Node admite varias redes de prueba de Ethereum (Rinkeby, Ropsten, Kovan) además de la red principal. +If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. -## Crear un Subgrafo +## Create a Subgraph -Primero sigue las instrucciones [aquí](/developer/define-subgraph-hosted) para instalar the Graph CLI. Crea un subgrafo pasando `graph init --product hosted service` +First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` -### De un Contrato Existente +### From an Existing Contract -Si ya tienes un contrato inteligente desplegado en la red principal de Ethereum o en una de las redes de prueba, el arranque de un nuevo subgrafo a partir de este contrato puede ser una buena manera de empezar a utilizar el Servicio Alojado. +If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from this contract can be a good way to get started on the Hosted Service. -Puedes utilizar este comando para crear un subgrafo que indexe todos los eventos de un contrato existente. Esto intentará obtener el contrato ABI de [Etherscan](https://etherscan.io/). +You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). ```sh graph init \ @@ -23,28 +23,28 @@ graph init \ / [] ``` -Además, puedes utilizar los siguientes argumentos opcionales. Si la ABI no puede ser obtenida de Etherscan, vuelve a solicitar una ruta de archivo local. Si falta algún argumento opcional en el comando, éste te lleva a través de un formulario interactivo. +Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form. ```sh --network \ --abi \ ``` -El ``en este caso es tu nombre de usuario u organización de github, `` es el nombre para tu subgrafo, y `` es el nombre opcional del directorio donde graph init pondrá el manifiesto del subgrafo de ejemplo. El `` es la dirección de tu contrato existente. `` es el nombre de la red Ethereum en la que está activo el contrato. `` es una ruta local a un archivo ABI del contrato. **Tanto --network como --abi son opcionales** +The `` in this case is your github user or organization name, `` is the name for your subgraph, and `` is the optional name of the directory where graph init will put the example subgraph manifest. The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. 
**Both --network and --abi are optional.** -### De un Subgrafo de Ejemplo +### From an Example Subgraph -El segundo modo que admite `graph init` es la creación de un nuevo proyecto a partir de un subgrafo de ejemplo. El siguiente comando lo hace: +The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: ``` graph init --from-example --product hosted-service / [] ``` -El subgrafo de ejemplo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. El subgrafo maneja estos eventos escribiendo entidades `Gravatar` en el almacén de the Graph Node y asegurándose de que éstas se actualicen según los eventos. Continúa con el [manifiesto del subgrafo](/developer/create-subgraph-hosted#the-subgraph-manifest) para entender mejor a qué eventos de tus contratos inteligentes hay que prestar atención, los mapeos y mucho más. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. -## Redes Admitidas en el Servicio Alojado +## Supported Networks on the Hosted Service -Ten en cuenta que las siguientes redes son admitidas en el Servicio Alojado. Las redes fuera de la red principal de Ethereum ('mainnet') no son actualmente admitidas en [The Graph Explorer.](https://thegraph.com/explorer) +Please note that the following networks are supported on the Hosted Service. Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) - `mainnet` - `kovan` From 0bdcabd74dcbfb5237de389c0c97147e5a37aa2a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:09 -0500 Subject: [PATCH 075/241] New translations what-is-hosted-service.mdx (Arabic) --- pages/ar/hosted-service/what-is-hosted-service.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/hosted-service/what-is-hosted-service.mdx b/pages/ar/hosted-service/what-is-hosted-service.mdx index 491b79119f4f..7f604c8dc31a 100644 --- a/pages/ar/hosted-service/what-is-hosted-service.mdx +++ b/pages/ar/hosted-service/what-is-hosted-service.mdx @@ -6,7 +6,7 @@ This section will walk you through deploying a subgraph to the Hosted Service, o If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. -## إنشاء الـ Subgraph +## Create a Subgraph First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. 
Create a subgraph by passing in `graph init --product hosted service` @@ -34,13 +34,13 @@ The `` in this case is your github user or organization name, `/ [] ``` -يعتمد مثال الـ subgraph على عقد Gravity بواسطة Dani Grant الذي يدير avatars للمستخدم ويصدر أحداث ` NewGravatar ` أو ` UpdateGravatar ` كلما تم إنشاء avatars أو تحديثها. يعالج الـ subgraph هذه الأحداث عن طريق كتابة كيانات ` Gravatar ` إلى مخزن Graph Node والتأكد من تحديثها وفقا للأحداث. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. +The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. ## Supported Networks on the Hosted Service From c10296ef63563c96fbee66e0e5b6bf49f8ba175b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:12 -0500 Subject: [PATCH 076/241] New translations deploy-subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/deploy-subgraph-studio.mdx | 48 +++++++++++----------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/zh/studio/deploy-subgraph-studio.mdx b/pages/zh/studio/deploy-subgraph-studio.mdx index 62f614ab7d15..2155d8fe8976 100644 --- a/pages/zh/studio/deploy-subgraph-studio.mdx +++ b/pages/zh/studio/deploy-subgraph-studio.mdx @@ -1,68 +1,68 @@ --- -title: 将一个子图部署到子图工作室 +title: Deploy a Subgraph to the Subgraph Studio --- -将一个子图部署到子图工作室是非常简单的。 你可以通过以下步骤完成: +Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: -- 安装Graph CLI(同时使用yarn和npm)。 -- 在子图工作室中创建你的子图 -- 从CLI认证你的账户 -- 将一个子图部署到子图工作室 +- Install The Graph CLI (with both yarn and npm) +- Create your Subgraph in the Subgraph Studio +- Authenticate your account from the CLI +- Deploying a Subgraph to the Subgraph Studio -## 安装Graph CLI +## Installing Graph CLI -我们使用相同的CLI将子图部署到我们的 [托管服务](https://thegraph.com/hosted-service/) 和[Subgraph Studio](https://thegraph.com/studio/)中。 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 +We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. -**用yarn安装:** +**Install with yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**用npm安装:** +**Install with npm:** ```bash npm install -g @graphprotocol/graph-cli ``` -## 在子图工作室中创建你的子图 +## Create your Subgraph in Subgraph Studio -在部署你的实际子图之前,你需要在 [子图工作室](https://thegraph.com/studio/)中创建一个子图。 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 +Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. 
-## 初始化你的子图 +## Initialize your Subgraph -一旦你的子图在子图工作室中被创建,你可以用这个命令初始化子图代码。 +Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: ```bash graph init --studio ``` -``值可以在Subgraph Studio中你的子图详情页上找到。 +The `` value can be found on your subgraph details page in Subgraph Studio: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -运行`graph init`后,你会被要求输入你想查询的合同地址、网络和abi。 这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 +After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. -## Graph 认证 +## Graph Auth -在能够将你的子图部署到子图工作室之前,你需要在CLI中登录到你的账户。 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 +Before being able to deploy your subgraph to Subgraph Studio, you need to login to your account within the CLI. To do this, you will need your deploy key that you can find on your "My Subgraphs" page or on your subgraph details page. -以下是你需要使用的命令,以从CLI进行认证: +Here is the command that you need to use to authenticate from the CLI: ```bash graph auth --studio ``` -## 将一个子图部署到子图工作室 +## Deploying a Subgraph to Subgraph Studio -一旦你准备好了,你可以将你的子图部署到子图工作室。 这样做不会将你的子图发布到去中心化的网络中,它只会将它部署到你的Studio账户中,在那里你将能够测试它并更新元数据。 +Once you are ready, you can deploy your subgraph to Subgraph Studio. Doing this won't publish your subgraph to the decentralized network, it will only deploy it to your Studio account where you will be able to test it and update the metadata. -这里是你需要使用的CLI命令,以部署你的子图。 +Here is the CLI command that you need to use to deploy your subgraph. ```bash graph deploy --studio ``` -运行这个命令后,CLI会要求提供一个版本标签,你可以随意命名,你可以使用 `0.1`和 `0.2`这样的标签,或者也可以使用字母,如 `uniswap-v2-0.1` . 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 +After running this command, the CLI will ask for a version label, you can name it however you want, you can use labels such as `0.1` and `0.2` or use letters as well such as `uniswap-v2-0.1` . Those labels will be visible in Graph Explorer and can be used by curators to decide if they want to signal on this version or not, so choose them wisely. -一旦部署完毕,你可以在子图工作室中使用控制面板测试你的子图,如果需要的话,可以部署另一个版本,更新元数据,当你准备好后,将你的子图发布到Graph Explorer。 +Once deployed, you can test your subgraph in Subgraph Studio using the playground, deploy another version if needed, update the metadata, and when you are ready, publish your subgraph to Graph Explorer. From 782195762aebd273823eed5f8344ee2ec9f7ac52 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:14 -0500 Subject: [PATCH 077/241] New translations billing.mdx (Spanish) --- pages/es/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/studio/billing.mdx b/pages/es/studio/billing.mdx index 9a9d4593cced..588cd2ed2f40 100644 --- a/pages/es/studio/billing.mdx +++ b/pages/es/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### Descripción +### Overview Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. 
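The Studio commands shown above (`graph init --studio`, `graph auth --studio`, and `graph deploy --studio`) chain together into one short workflow. The sketch below strings them together in order; the subgraph slug, deploy key, and version label are hypothetical placeholders rather than values taken from the docs.

```bash
# Hedged end-to-end sketch of the Studio flow described above; all values are placeholders.
graph init --studio my-example-subgraph     # slug copied from the subgraph details page in Studio
cd my-example-subgraph                      # assumed: graph init scaffolds into a folder of the same name

graph codegen && graph build                # generate types and compile the mappings locally

graph auth --studio 0123456789abcdef        # deploy key from the "My Subgraphs" page (placeholder here)
graph deploy --studio my-example-subgraph   # prompts for a version label, e.g. 0.1
```

Deploying this way only pushes the build to your Studio account for testing; publishing to the decentralized network remains a separate step, as noted above.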
It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 4a287ecd49c68154f33a6cccf557c4658d51929b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:15 -0500 Subject: [PATCH 078/241] New translations billing.mdx (Arabic) --- pages/ar/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ar/studio/billing.mdx b/pages/ar/studio/billing.mdx index 67a5a8c1420e..588cd2ed2f40 100644 --- a/pages/ar/studio/billing.mdx +++ b/pages/ar/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### نظره عامة +### Overview Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 407c6f85f2437c14a680337a2c25ef61daaef4f9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:16 -0500 Subject: [PATCH 079/241] New translations billing.mdx (Japanese) --- pages/ja/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/studio/billing.mdx b/pages/ja/studio/billing.mdx index 7f23343baa17..588cd2ed2f40 100644 --- a/pages/ja/studio/billing.mdx +++ b/pages/ja/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### 概要 +### Overview Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From ec99bb85e16d6792c00efcbdb5a0da21cdfe93f4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:17 -0500 Subject: [PATCH 080/241] New translations billing.mdx (Korean) --- pages/ko/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ko/studio/billing.mdx b/pages/ko/studio/billing.mdx index 4788124913d9..588cd2ed2f40 100644 --- a/pages/ko/studio/billing.mdx +++ b/pages/ko/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### 개요 +### Overview Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. 
It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 29f959b1b3f3b0a2880a463374743539beb957f0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:18 -0500 Subject: [PATCH 081/241] New translations billing.mdx (Chinese Simplified) --- pages/zh/studio/billing.mdx | 64 ++++++++++++++++++------------------- 1 file changed, 32 insertions(+), 32 deletions(-) diff --git a/pages/zh/studio/billing.mdx b/pages/zh/studio/billing.mdx index ce99acd65775..588cd2ed2f40 100644 --- a/pages/zh/studio/billing.mdx +++ b/pages/zh/studio/billing.mdx @@ -1,43 +1,43 @@ --- -title: 子图工作室的计费 +title: Billing on the Subgraph Studio --- -### 概述 +### Overview -发票是客户所欠付款金额的报表,通常在系统中每周生成一次。 你需要根据你使用API密钥产生的查询费用来支付费用。 账单合同在[Polygon](https://polygon.technology/)网络上。 它将允许你: +Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: -- 添加和移除GRT -- 根据你向你的账户添加了多少GRT,你移除了多少,以及你的发票来跟踪你的余额。 -- 根据产生的查询费用自动结算付款 +- Add and remove GRT +- Keep track of your balances based on how much GRT you have added to your account, how much you have removed, and your invoices +- Automatically clear payments based on query fees generated -为了将GRT添加到你的账户中,你将需要通过以下步骤: +In order to add GRT to your account, you will need to go through the following steps: -1. 在您选择的交易所购买GRT和ETH -2. 将GRT和ETH发送到你的钱包里 -3. 使用用户界面桥接GRT到Polygon +1. Purchase GRT and ETH on an exchange of your choice +2. Send the GRT and ETH to your wallet +3. Bridge GRT to Polygon using the UI - a) 在你向Polygon桥发送任何数量的GRT后,你将在几分钟内收到0.001 Matic。 你可以在搜索栏中输入你的地址,在 [Polygonscan](https://polygonscan.com/)上跟踪交易情况。 + a) You will receive 0.001 Matic in a few minutes after you send any amount of GRT to the Polygon bridge. You can track the transaction on [Polygonscan](https://polygonscan.com/) by inputting your address into the search bar. -4. 在Polygon的计费合同中加入桥接的GRT。 计费合同地址是:[0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). +4. Add bridged GRT to the billing contract on Polygon. The billing contract address is: [0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). - a) 为了完成第4步,你需要将钱包中的网络切换到Polygon。 你可以通过连接你的钱包并点击[这里](https://chainlist.org/) 的 "选择Matic(Polygon)主网 "来添加Polygon的网络。一旦你添加了网络,在你的钱包里通过导航到右上角的网络图标来切换它。 在Metamask中,该网络被称为 **Matic Mainnnet.** + a) In order to complete step #4, you'll need to switch your network in your wallet to Polygon. You can add Polygon's network by connecting your wallet and clicking on "Choose Matic (Polygon) Mainnet" [here.](https://chainlist.org/) Once you've added the network, switch it over in your wallet by navigating to the network pill on the top right hand side corner. In Metamask, the network is called **Matic Mainnnet.** -在每个周末,如果你使用了你的API密钥,你将会收到一张基于你在这期间产生的查询费用的发票。 这张发票将用你余额中的GRT来支付。 查询量是由你拥有的API密钥来评估的。 你的余额将在费用提取后被更新。 +At the end of each week, if you used your API keys, you will receive an invoice based on the query fees you have generated during this period. This invoice will be paid using GRT available in your balance. Query volume is evaluated by the API keys you own. Your balance will be updated after fees are withdrawn. 
-#### 下面是你如何进行开票的过程: +#### Here’s how you go through the invoicing process: -你的发票可以有4种状态: +There are 4 states your invoice can be in: -1. 创建--你的发票刚刚创建,还没有被支付 -2. 已付 - 你的发票已成功支付 -3. 未支付 - 账单合同上你的余额中没有足够的GRT -4. 错误 - 处理付款时出现了错误 +1. Created - your invoice has just been created and not been paid yet +2. Paid - your invoice has been successfully paid +3. Unpaid - there is not enough GRT in your balance on the billing contract +4. Error - there is an error processing the payment -**更多信息见下图:** +**See the diagram below for more information:** ![Billing Flow](/img/billing-flow.png) -关于在Subgraph Studio上如何进行计费的快速演示,请看下面的视频。 +For a quick demo of how billing works on the Subgraph Studio, check out the video below:
-### 多重签名用户 +### Multisig Users -多重合约是只能存在于它们所创建的网络上的智能合约,所以如果你在以太坊主网上创建了一个--它将只存在于主网上。 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 +Multisigs are smart-contracts that can exist only on the network they have been created, so if you created one on Ethereum Mainnet - it will only exist on Mainnet. Since our billing uses Polygon, if you were to bridge GRT to the multisig address on Polygon the funds would be lost. -为了克服这个问题,我们创建了 [一个专门的工具](https://multisig-billing.thegraph.com/),它将帮助你用一个标准的钱包/EOA(一个由私钥控制的账户)在我们的计费合同上存入GRT(代表multisig)。 +To overcome this issue, we created [a dedicated tool](https://multisig-billing.thegraph.com/) that will help you deposit GRT on our billing contract (on behalf of the multisig) with a standard wallet / EOA (an account controlled by a private key). -你可以在这里访问我们的Multisig计费工具:https://multisig-billing.thegraph.com/ +You can access our Multisig Billing Tool here: https://multisig-billing.thegraph.com/ -这个工具将指导你完成以下步骤: +This tool will guide you to go through the following steps: -1. 连接你的标准钱包/EOA(这个钱包需要拥有一些ETH以及你要存入的GRT)。 -2. 桥GRT到Polygon。 在交易完成后,你需要等待7-8分钟,以便最终完成桥梁转移。 -3. 一旦你的GRT在你的Polygon余额中可用,你就可以把它们存入账单合同,同时在`Multisig地址栏` 中指定你要资助的multisig地址。 +1. Connect your standard wallet / EOA (this wallet needs to own some ETH as well as the GRT you want to deposit) +2. Bridge GRT to Polygon. You will have to wait 7-8 minutes after the transaction is complete for the bridge transfer to be finalized. +3. Once your GRT is available on your Polygon balance you can deposit them to the billing contract while specifying the multisig address you are funding in the `Multisig Address` field. -一旦存款交易得到确认,你就可以回到 [Subgraph Studio](https://thegraph.com/studio/),并与你的Gnosis Safe Multisig连接,以创建API密钥并使用它们来生成查询。 +Once the deposit transaction has been confirmed you can go back to [Subgraph Studio](https://thegraph.com/studio/) and connect with your Gnosis Safe Multisig to create API keys and use them to generate queries. -这些查询将产生发票,这些发票将使用multisig的账单余额自动支付。 +Those queries will generate invoices that will be paid automatically using the multisig’s billing balance. From d5735f7751dca7bcc4847a16f43fb39978f43337 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:19 -0500 Subject: [PATCH 082/241] New translations deploy-subgraph-studio.mdx (Spanish) --- pages/es/studio/deploy-subgraph-studio.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/es/studio/deploy-subgraph-studio.mdx b/pages/es/studio/deploy-subgraph-studio.mdx index 72ca3decc35b..2155d8fe8976 100644 --- a/pages/es/studio/deploy-subgraph-studio.mdx +++ b/pages/es/studio/deploy-subgraph-studio.mdx @@ -1,5 +1,5 @@ --- -title: Despliegue de un subgrafo en Subgraph Studio +title: Deploy a Subgraph to the Subgraph Studio --- Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: @@ -13,13 +13,13 @@ Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. 
-**Instalar con yarn:** +**Install with yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Instalar con npm:** +**Install with npm:** ```bash npm install -g @graphprotocol/graph-cli @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. -## Inicializa tu Subgrafo +## Initialize your Subgraph Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From 6bf670d7780cceac27c2ac4f2c35d8bf48c2dc9a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:20 -0500 Subject: [PATCH 083/241] New translations deploy-subgraph-studio.mdx (Arabic) --- pages/ar/studio/deploy-subgraph-studio.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/studio/deploy-subgraph-studio.mdx b/pages/ar/studio/deploy-subgraph-studio.mdx index b9d406812541..2155d8fe8976 100644 --- a/pages/ar/studio/deploy-subgraph-studio.mdx +++ b/pages/ar/studio/deploy-subgraph-studio.mdx @@ -13,13 +13,13 @@ Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. -**التثبيت بواسطة yarn:** +**Install with yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**التثبيت بواسطة npm:** +**Install with npm:** ```bash npm install -g @graphprotocol/graph-cli @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. -## قم بتهيئة Subgraph الخاص بك +## Initialize your Subgraph Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From 7ccd5ac9fef785a6db8f4f1a11d9241413029277 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:21 -0500 Subject: [PATCH 084/241] New translations deploy-subgraph-studio.mdx (Japanese) --- pages/ja/studio/deploy-subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/deploy-subgraph-studio.mdx b/pages/ja/studio/deploy-subgraph-studio.mdx index 69b6786ebda4..2155d8fe8976 100644 --- a/pages/ja/studio/deploy-subgraph-studio.mdx +++ b/pages/ja/studio/deploy-subgraph-studio.mdx @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. 
-## サブグラフの初期化 +## Initialize your Subgraph Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From e27d45135375bc816e67476a7f6c656605e5e99a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:24 -0500 Subject: [PATCH 085/241] New translations curating.mdx (Spanish) --- pages/es/curating.mdx | 104 +++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 52 deletions(-) diff --git a/pages/es/curating.mdx b/pages/es/curating.mdx index 425cb5608b6f..85cfcf091c87 100644 --- a/pages/es/curating.mdx +++ b/pages/es/curating.mdx @@ -2,102 +2,102 @@ title: curación --- -Los curadores son vitales para la economía descentralizada que conforma a The Graph. Ellos utilizan su conocimiento sobre el ecosistema Web3 para calificar y señalar los subgrafos que deben ser indexados en la red de The Graph. A través del explorador, los curadores pueden ver los datos de la red y tomar decisiones sobre la señalización. The Graph Network recompensa a los curadores que señalan subgrafos valiosos para la red ya que ganan una parte de las tarifas de consulta que generan los subgrafos. Los curadores están motivados económicamente a través de la señalización rápida de dichos subgrafos. Estas señales de los curadores son importantes para los Indexadores, quienes luego pueden procesar o indexar los datos de estos subgrafos señalados. +Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. -Al señalar, los curadores pueden decidir entre señalar en una versión específica del subgrafo o hacerlo usando la opción de auto migración. Cuando se señala mediante la auto migración, las acciones de un curador siempre se actualizarán a la última versión publicada por el desarrollador. Si, en cambio, decides señalar una versión específica, las acciones siempre permanecerán en esa versión específica. +When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. -Recuerda que la curación es riesgosa. Por favor, haz una investigación rigurosa para asegurarte de seleccionar los subgrafos en los que confiar. Crear un subgrafo no requiere permiso, por lo que las personas pueden crear subgrafos y llamarlos con el nombre que deseen. Para obtener más orientación sobre los riesgos de la curación, consulta la +Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) -## Curva de vinculación 101 +## Bonding Curve 101 -Primero, demos un paso atrás. 
Cada subgrafo tiene una curva de vinculación en la que se acuñan las acciones de curación, cuando un usuario agrega una señal ** a ** a la curva. La curva de vinculación de cada subgrafo es única. Las curvas de vinculación están diseñadas para que el precio tras acuñar (mintear) una participación dentro de la curación de un subgrafo aumente linealmente, sobre el número de participaciones acuñadas. +First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. -![Precio por acciones](/img/price-per-share.png) +![Price per shares](/img/price-per-share.png) -Como resultado, el precio aumenta linealmente, lo que significa que con el tiempo resultará más caro comprar una participación. A continuación, se muestra un ejemplo de lo que queremos decir; consulta la curva de vinculación a continuación: +As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: -![Curva de vinculación](/img/bonding-curve.png) +![Bonding curve](/img/bonding-curve.png) -Imagina que tenemos dos curadores que anclan participaciones dentro de un subgrafo: +Consider we have two curators that mint shares for a subgraph: -- El Curador A es el primero en señalar dentro del subgrafo. Al agregar 120.000 GRT en la curva, pueden acuñar 2000 participaciones. -- La señal del Curador B está en el subgrafo en algún momento posterior al primero. Para recibir la misma cantidad participativa que el Curador A, este deberá agregar 360.000 GRT en la curva. -- Dado que ambos curadores poseen la mitad participativa de dicha curación, recibirían una cantidad igual en las recompensas por ser curador. -- Si alguno de los curadores quemara sus 2000 participaciones, recibirían 360.000 GRT. -- El curador restante recibiría todas las recompensas en ese subgrafo. Si quemaran sus participaciones a fin de retirar sus GRT, recibirían 120.000 GRT. -- **TLDR:** El valor de las participaciones en GRT son determinadas por la curva de vinculación y suelen ser volátiles. Existe la posibilidad de incurrir en grandes pérdidas. La señalización temprana significa que ingresas menos GRT por cada acción. Profundizando un poco, esto significa que ganarás mas recompensas en GRT siendo el primer curador en ese subgrafo que los posteriores en llegar. +- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. +- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. +- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. +- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. +- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. +- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. 
By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. -En general, una curva de vinculación es una curva matemática que define la relación entre la oferta de tokens y el precio de los activos. Siendo específicos en la curación de subgrafos, **el precio de cada participación del subgrafo aumenta con cada token invertido** y el **precio de cada participación disminuye con cada token vendido.** +In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** -En el caso de The Graph, se aprovecha [la implementación de una fórmula por parte de Bancor para la curva de vinculación](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). +In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. -## ¿Cómo señalar? +## How to Signal -Ahora que hemos abarcado los conceptos básicos sobre cómo funciona la curva de vinculación, vamos a enseñarte como señalar un subgrafo. Dentro de la pestaña Curador en el explorador de The Graph, los curadores podrán señalar y anular la señal en ciertos subgrafos basados en las estadísticas de la red. Para una descripción general paso a paso de cómo hacer esto en el explorador, [haz click aquí.](https://thegraph.com/docs/explorer) +Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) -Un curador puede optar por señalar una versión especifica de un subgrafo, o puede optar por que su señal migre automáticamente a la versión de producción mas reciente de ese subgrafo. Ambas son estrategias válidas y tienen sus pros y sus contras. +A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. -Señalar una versión específica es esencialmente útil cuando un subgrafo es usado por múltiples dApps. Una dApp podría necesitar una actualización periódica a fin de que el subgrafo tenga nuevas funciones. Otra dApp podría necesitar una versión de subgrafo mas antigua y bien probada. Luego de la curación inicial, se incurre en una tarifa estándar del 1%. +Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. -Hacer que tu señal migre automáticamente a la versión más reciente, puede ser muy bueno si buscas asegurar la mayor cantidad de tarifas por consultas. Cada vez que curas, se incurre en un impuesto de curación del 1%. Además, pagaras un impuesto de curación del 0.5% en cada migración. 
Se aconseja a los desarrolladores de subgrafos a qué no publiquen nuevas versiones con frecuencia; puesto que deberán pagar una tarifa de curación del 0.5% en todas las acciones de curación migradas automáticamente. +Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. -> **Nota**: La primer dirección en señalar un subgrafo específico, se considera el primer curador, y éste tendrá que hacer un trabajo mucho más intenso en cuánto al gas, a diferencia del resto de los curadores que vengan después de él, esto debido a que el primer curador comienza los tokens participativos de la curación, inicia la curva de vinculación y también transfiere los tokens dentro del proxy de The Graph. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. -## ¿Qué significa Señalar para The Graph Network? +## What does Signaling mean for The Graph Network? -Para que los consumidores finales puedan consultar un subgrafo, primero se debe indexar el subgrafo. La indexación es un proceso en el que los archivos, los datos y los metadatos se examinan, catalogan y luego se indexan para que los resultados se puedan encontrar más rápido. Para que se puedan buscar los datos de un subgrafo, es necesario que esté organizado. +For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. -Por lo tanto, si los Indexadores tuvieran que adivinar qué subgrafos deberían indexar, habría pocas posibilidades de que obtengan tarifas de consulta sólidas porque no tendrían forma de validar qué subgrafos son de buena calidad. Ingrese a la curación. +And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. -Los curadores hacen que la red The Graph sea eficiente y la señalización es el proceso que utilizan los curadores para que los Indexadores sepan que un subgrafo es bueno para indexar, donde los GRT son agregados a la curva de vinculación de un subgrafo. Los Indexadores pueden confiar intrínsecamente en la señal de un curador porque, al señalar, los curadores acuñan una acción de curación para el subgrafo, lo que les da derecho a una parte de las tarifas de consulta futuras que impulsa el subgrafo. La señal del curador se representa como un token ERC20 llamado Graph Curation Shares (GCS). Los curadores que quieran ganar más tarifas por consulta deberán anclar sus GRT a los subgrafos que predicen que generarán un fuerte flujo de tarifas dentro de la red. 
Los curadores también pueden ganar menos tarifas por consulta si eligen curar o señalar un subgrafo de baja calidad, ya que habrá menos consultas que procesar o menos Indexadores para procesar esas consultas. ¡Mira el siguiente diagrama! +Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! -![Diagrama de Señalización](/img/curator-signaling.png) +![Signaling diagram](/img/curator-signaling.png) -Los Indexadores pueden encontrar subgrafos para indexar en función de las señales de curación que ven en The Graph Explorer (captura de pantalla a continuación). +Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). -![Subgrafos del Explorador](/img/explorer-subgraphs.png) +![Explorer subgraphs](/img/explorer-subgraphs.png) -## Riesgos +## Risks -1. El mercado de consultas es inherentemente joven en The Graph y existe el riesgo de que su APY (Rentabilidad anualizada) sea más bajo de lo esperado debido a la dinámica del mercado que recién está empezando. -2. Cuando un curador ancla sus GRT en un subgrafo, deberá pagar un impuesto de curación equivalente al 1%. Esta tarifa se quema y el resto se deposita en el suministro de reserva de la curva de vinculación. -3. Cuando los curadores queman sus acciones para retirar los GRT, se reducirá la participación de GRT de las acciones restantes. Ten en cuenta que, en algunos casos, los curadores pueden decidir quemar sus acciones, **todas al mismo tiempo**. Esta situación puede ser común si un desarrollador de dApp deja de actualizar la aplicación, no sigue consultando su subgrafo o si falla el mismo. Como resultado, es posible que los curadores solo puedan retirar una fracción de sus GRT iniciales. Si buscas un rol dentro red que conlleve menos riesgos, consulta \[Delegators\] (https://thegraph.com/docs/delegating). -4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. - - Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá en una tarifa de curación del 0.5%. - - Si has señalado en una versión de subgrafo específica y falla, tendrás que quemar manualmente tus acciones de curación. Ten en cuenta que puedes recibir más o menos GRT de los que depositaste inicialmente en la curva de curación, y esto es un riesgo que todo curador acepta al empezar. 
Luego podrás firmar la nueva versión del subgrafo, incurriendo así en un impuesto de curación equivalente al 1%. +1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). +4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. -## Preguntas frecuentes sobre Curación +## Curation FAQs -### 1. ¿Qué porcentaje obtienen los curadores de las comisiones por consulta? +### 1. What % of query fees do Curators earn? -Al señalar un subgrafo, ganarás parte de todas las tarifas de consulta que genera dicho subgrafo. El 10% de todas las tarifas de consulta va destinado a los Curadores y se distribuye proporcionalmente en base a la participación de cada uno. Este 10% está sujeto a gobernanza. +By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. -### 2. ¿Cómo decido qué subgrafos son de alta calidad para señalar? +### 2. How do I decide which subgraphs are high quality to signal on? -Encontrar subgrafos de alta calidad es una tarea compleja, pero se puede abordar de muchas formas diferentes. Como Curador, quieres buscar subgrafos confiables que impulsen el volumen de consultas. Un subgrafo confiable puede ser valioso si es completo, preciso y respalda las necesidades de dicha dApp. Es posible que un subgrafo con una arquitectura deficiente deba revisarse o volver a publicarse, y también puede terminar fallando. Es fundamental que los Curadores revisen la arquitectura o el código de un subgrafo para evaluar si un subgrafo es valioso. Como resultado: +Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. 
As a result: -- Los curadores pueden usar su conocimiento de una red para intentar predecir cómo un subgrafo puede generar un volumen de consultas mayor o menor a largo plazo -- Los Curadores también deben comprender las métricas que están disponibles a través de Graph Explorer. Las métricas como el volumen de consultas anteriores y quién es el desarrollador del subgrafo pueden ayudar a determinar si vale la pena señalar un subgrafo o no. +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. -### 3. ¿Cuál es el costo de actualizar un subgrafo? +### 3. What’s the cost of upgrading a subgraph? -La migración de tus acciones de curación a una nueva versión de subgrafo incurre en un impuesto de curación del 1%. Los Curadores pueden optar por suscribirse a la versión más reciente de un subgrafo. Cuando las acciones de los curadores se migran automáticamente a una nueva versión, los curadores también pagarán la mitad del impuesto de curación, es decir el 0.5%, porque la mejora de los subgrafos es una acción on-chain que requiere cubrir los costos del gas. +Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. -### 4. ¿Con qué frecuencia puedo actualizar mi subgrafo? +### 4. How often can I upgrade my subgraph? -Se sugiere que no actualices tus subgrafos con demasiada frecuencia. Consulta la pregunta anterior para obtener más detalles. +It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. -### 5. ¿Puedo vender mis acciones de curación? +### 5. Can I sell my curation shares? -Las participaciones de un curador no se pueden "comprar" o "vender" como otros tokens ERC20 con los que seguramente estás familiarizado. Solo pueden anclar (crearse) o quemarse (destruirse) a lo largo de la curva de vinculación de un subgrafo en particular. La cantidad de GRT necesaria para generar una nueva señal y la cantidad de GRT que recibes cuando quemas tu señal existente, está determinada por esa curva de vinculación. Como curador, debes saber que cuando quemas tus acciones de curación para retirar GRT, puedes terminar con más o incluso con menos GRT de los que depositaste en un inicio. +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -¿Sigues confundido? Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación: +Still confused? Check out our Curation video guide below:
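Beyond the video, it can also help to look at raw curation data before signalling. The sketch below queries The Graph's network subgraph for the deployments carrying the most signal; it is illustrative only — the entity and field names used here (`subgraphs`, `currentSignalledTokens`, `displayName`) are assumptions that should be checked against the published schema of the graph-network-mainnet subgraph before relying on them.

```graphql
# Illustrative sketch — verify entity and field names against the network subgraph schema.
{
  subgraphs(first: 5, orderBy: currentSignalledTokens, orderDirection: desc) {
    displayName
    currentSignalledTokens
  }
}
```

Sorting by signal is only a starting point; as the FAQ above notes, past query volume and the subgraph developer's track record matter at least as much when deciding where to signal.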
From 4a81b7cfdb109ee9da63ed15dd7b8f1e85188feb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:25 -0500 Subject: [PATCH 086/241] New translations global.json (Korean) --- pages/ko/global.json | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/pages/ko/global.json b/pages/ko/global.json index 7676db4cfe8a..39bf594287dc 100644 --- a/pages/ko/global.json +++ b/pages/ko/global.json @@ -1,5 +1,17 @@ { - "aboutTheGraph": "The Graph 소개", + "language": "Language", + "aboutTheGraph": "About The Graph", "developer": "개발자", - "supportedNetworks": "지원되는 네트워크" + "supportedNetworks": "Supported Networks", + "collapse": "Collapse", + "expand": "Expand", + "previous": "Previous", + "next": "Next", + "editPage": "Edit page", + "pageSections": "Page Sections", + "linkToThisSection": "Link to this section", + "technicalLevelRequired": "Technical Level Required", + "notFoundTitle": "Oops! This page was lost in space...", + "notFoundSubtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "goHome": "Go Home" } From 9ef6a5086a1cb1a19142b40bb939f1acb9712810 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:27 -0500 Subject: [PATCH 087/241] New translations indexing.mdx (Korean) --- pages/ko/indexing.mdx | 350 +++++++++++++++++++++--------------------- 1 file changed, 175 insertions(+), 175 deletions(-) diff --git a/pages/ko/indexing.mdx b/pages/ko/indexing.mdx index 7485645acff9..ae7c24151872 100644 --- a/pages/ko/indexing.mdx +++ b/pages/ko/indexing.mdx @@ -4,47 +4,47 @@ title: 인덱싱(indexing) import { Difficulty } from '@/components' -인덱서는 인덱싱 및 쿼리 프로세싱 서비스를 제공하기 위해 더그래프 네트워크 상에 그래프 토큰(GRT)을 스테이킹하는 노드 운용자들입니다. 인덱서는 그들의 서비스에 대한 쿼리 수수료 및 인덱싱 보상을 얻습니다. 더불어, 그들은 Cobbs-Douglas 리베이트 기능에 따라, 그들의 업무에 비례하여 모든 네트워크 기여자와 함께 공유되는 리베이트 풀로부터 발생하는 수익 또한 얻습니다. +Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. -프로토콜에 스테이킹된 GRT는 해빙 기간이 적용되며, 인덱서가 악의적으로 응용 프로그램에 잘못된 데이터를 제공하거나 잘못된 인덱싱을 시행하는 경우 슬래싱(삭감) 패널티를 받을 수 있습니다. 또한, 인덱서들은 네트워크에 기여하기 위해서 위임자(Delegator)들로 부터 지분을 위임받을 수도 있습니다. +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. -인덱서들은 서브그래프의 큐레이션 신호에 따라 인덱싱할 서브그래프를 선택합니다. 여기서 큐레이터는 어느 서브그래프가 고품질인지, 혹은 우선 순위여야 하는지를 표시하기 위해 GRT를 스테이킹합니다. 소비자(예: 애플리케이션)들은 어느 인덱서가 그들의 서브그래프에 대해 쿼리를 처리하게 할 것인지에 대한 매개 변수 및 쿼리 수수료 가격에 대한 선호 내역을 설정할 수도 있습니다. +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. ## FAQ -### 네트워크 상의 인덱서가 되기 위해서 필요한 최소 스테이킹 요구사항은 어떻게 되나요? +### What is the minimum stake required to be an indexer on the network? -인덱서가 되기 위한 최소 스테이킹 수량은 현재 10만 GRT로 설정되어 있습니다. +The minimum stake for an indexer is currently set to 100K GRT. -### 인덱서는 어떻게 수익을 창출하나요? 
+### What are the revenue streams for an indexer? -**Query fee rebates** - 네트워크상에 쿼리를 제공함으로써 발생하는 지불입니다. 이러한 지불은 인덱서와 게이트웨이 간의 상태 채널을 통해 중재됩니다. 게이트웨이의 각 쿼리 요청에는 결제 및 쿼리 결과 유효성에 대한 해당 응답이 포함됩니다. +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - 연간 3%의 프로토콜 전체 인플레이션을 통해 생성되는 인덱싱 보상은 네트워크에 대한 서브그래프 배포를 인덱싱하는 인덱서에게 배포됩니다. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. -### 보상은 어떻게 분배되나요? +### How are rewards distributed? -인덱싱 보상은 연간 발행량의 3%로 설정된 프로토콜 인플레이션에서 비롯됩니다. 이러한 보상들은 각각에 대한 모든 큐레이션 신호의 비율에 따라 서브그래프들에 배포된 다음 해당 서브그래프에 할당된 지분에 기반하여 인덱서들에게 비례적으로 분배됩니다. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -보상을 계산하기 위한 수많은 도구들이 커뮤니티에 의해서 생성되었습니다. 여러분들은 [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)에서 이러한 도구 컬렉션들을 찾으실 수 있습니다. 또한 여러분들은 [Discord](https://discord.gg/vtvv7FP) 의 #delegators 및 #indexers 채널에서 최신 도구 리스트를 찾으실 수 있습니다. +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). -### 인덱싱 증명(POI)이란 무엇인가요? +### What is a proof of indexing (POI)? -POI는 네트워크상에서 인덱서가 그들에게 할당된 서브그래프를 인덱싱 하고 있는지 확인하는 데 사용됩니다. 해당 할당이 적절하게 인덱싱 보상을 받을 수 있도록 하기 위하여, 할당을 마감할 당시 현재 에폭의 첫 번째 블록에 대한 POI가 제출되어야합니다. 블록에 대한 POI는 해당 블록까지의 특정 서브그래프 배포에 대한 모든 엔티티 저장소 트랜잭션에 대한 요약입니다. +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. -### 인덱싱 보상은 언제 분배되나요? +### When are indexing rewards distributed? -할당은 활성 상태인 동안에 지속적으로 보상을 누적합니다. 보상들은 인덱서들에 의해 수집되며, 그들의 할당들이 마감될 때 마다 분배됩니다. 이는 인덱서가 강제로 종료하길 원할 때마다 수동으로 발생하거나 28 에폭 후에 위임자(Delegator)가 인덱서 할당을 닫을 수 있지만, 이러한 경우에는 결과적으로 보상이 생성되지 않습니다. 28 에폭은 최대 할당 수명입니다. (현재 한 에폭은 최대 24시간 지속됩니다.) +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 
28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). -### 보류중인 인덱서 보상은 모니터링 가능한가요? +### Can pending indexer rewards be monitored? -다양한 커뮤니티에 의해 제작된 대시보드들에는 보류중인 보상 가치를 포함하고 있으며, 이들은 다음과 같은 절차들을 통해 수동으로 손쉽게 확인이 가능합니다. +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. -`getRewards()`:를 호출하기 위해 이더스캔을 사용합니다. +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. 모든 활성화된 활당들에 대한 ID들을 얻기 위해 [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet)를 쿼리합니다. +1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -62,57 +62,57 @@ query indexerAllocations { Use Etherscan to call `getRewards()`: -- [Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract)의 이더스캔 인터페이스를 살펴봅니다. +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* `getRewards()`:를 호출하기 위해, - - **10번 항목의 getRewards**를 펼칩니다. getRewards dropdown. - - 입력란에 **allocationID**를 입력합니다. - - **Query** 버튼을 클릭합니다. +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. -### 분쟁이란 무엇이며 어디에서 볼 수 있나요? +### What are disputes and where can I view them? -분쟁 기간 동안 더그래프 상에서 인덱서의 쿼리와 할당은 이의 제기의 요소가 될 수 있습니다. 분쟁 기간은 분쟁의 종류에 따라 다릅니다. 쿼리/귀속 분야에는 7개의 에폭 분쟁 창이 존재하는 반면 할당에는 56개의 에폭이 존재합니다. 이 기간이 지나면 할당이나 쿼리에 대해 분쟁은 발생할 수 없습니다. 분쟁이 열리면 Fishermen에게 최소 10,000 GRT의 디파짓이 요구되며, 이 보증금은 분쟁이 마무리되고 해결이 이루어질 때까지 락업됩니다. Fishermend은 분쟁을 제기한 모든 네트워크 참여자입니다. +Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. -분쟁은 `Disputes` 탭 하부의 인덱서 프로파일 페이지 내의 UI에서 볼 수 있습니다. +Disputes have **three** possible outcomes, so does the deposit of the Fishermen. -- 해당 분쟁이 반려되면, Fishermen이 스테이킹한 GRT가 소각되고 해당 분쟁에서 언급된 인덱서는 슬래싱 삭감을 받지 않습니다. -- 분쟁이 무승부로 결론이 나면, Fishermen들의 예치금은 반환되고 논란이 되고 있는 인덱서는 슬래싱 삭감을 받지 않을 것입니다. -- 만약 해당 분쟁이 받아들여지면, Fishermen들이 예치한 GRT가 반환되고 분쟁 중인 인덱서는 슬래싱 삭감을 받으며, Fishermen은 해당 GRT 삭감 수량의 50%를 얻게 됩니다. +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. +- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. -### 쿼리 수수료 리베이트는 무엇이며 언제 배포되나요? 
+### What are query fee rebates and when are they distributed? -어떠한 할당이 닫히고, Subgraph의 쿼리 수수료 리베이트 풀에 누적될 때마다 게이트웨이에 쿼리 수수료들이 누적됩니다. 리베이트 풀은 인덱서가 그들이 네트워크에 대해 얻는 쿼리 수수료들의 양에 대략적인 비율의 스테이킹 할당을 장려하도록 설계되었습니다. 특정 Indexer에 지급되는 풀의 쿼리 수수료 비율은 Cobbs-Douglas Production Function을 사용하여 계산됩니다; 각 Indexer에게 분배되는 금액은 풀에 대한 그들의 기여도 및 Subgraph에 대한 지분 할당에 관한 함수관계에 있습니다. +Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. -할당이 종료되고 분쟁 기간이 지나면 인덱서에 의해 리베이트가 청구될 수 있습니다. 리베이트 청구 시 쿼리 수수료들은 리베이트는 queryFeeCut 및 위임 풀 비율에 기반하여, 인덱서와 해당 위임자(Delegator)들에게 분배됩니다. +Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. -### query fee cut 및 indexing reward cut는 무엇인가요? +### What is query fee cut and indexing reward cut? -`queryFeeCut` 및 `indexingRewardCut` 값은 Indexer가 해당 Indexer와 Delegator 간의 GRT 분배를 제어하기 위해 CooldownBlocks와 함께 설정할 수 있는 위임 매개 변수입니다. 위임 매개변수 설정에 대한 지침을 위해 [Staking in the Protocol](/indexing#stake-in-the-protocol)의 마지막 단계를 참조하시길 바랍니다. +The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. -- **queryFeeCut** - 서브그래프에 축적되어 인덱서에게 분배 될 쿼리 피 리베이트의 비율(%)입니다. 만약 이 값이 95%로 설정된 경우, 해당 인덱서는 어떠한 분배가 청구될 때, 해당 쿼리 수수료 리베이트 풀의 95%를 가져가게 되고, 나머지 5%는 위임자(Delegator)들에게 분배됩니다. +- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. -- **indexingRewardCut** - 서브그래프 상에 축적되어 인덱서에게 분배 될 인덱싱 보상의 비율(%)입니다. 이 값이 95%로 설정된 경우, 할당이 닫힐 때 인덱서는 인덱싱 보상 풀의 95%를 받고, 위임자들은 나머지 5%를 분배받습니다. +- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. -### 인덱서는 인덱싱할 서브그래프를 어떻게 알 수 있습니까? +### How do indexers know which subgraphs to index? -인덱서는 서브그래프 인덱싱 결정을 위한 고급 기술을 적용하여 스스로 차별화가 가능하지만, 일반적인 아이디어를 제공하기 위해 네트워크에서 서브그래프를 평가하는 데 사용되는 몇 가지 주요 메트릭스에 대해 설명하겠습니다. +Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: -- **큐레이션 신호** - 특정 서브그래프에 적용되는 네트워크 큐레이션 신호의 비율은 특히 쿼리 볼류밍이 증가하는 부트스트랩 단계 동안 해당 서브그래프에 대한 관심을 나타내는 좋은 지표가 됩니다. 
+- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. -- **축적된 쿼리 수수료** - 특정 서브그래프에 대해 수집된 쿼리 수수료 양에 대한 과거 데이터는 미래 수요를 나타내는 좋은 지표입니다. +- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. -- **스테이킹 수량** - 다른 인덱서의 동작을 모니터링하거나 특정 서브그래프에 할당된 총 지분 비율을 살펴보면 인덱서가 서브그래프 쿼리에 대한 공급 측을 모니터링하여 네트워크가 신뢰하는 서브그래프 또는 더 많은 공급을 필요로 하는 서브그래프를 식별할 수 있습니다. +- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. -- **인덱싱 보상이 없는 서브그래프** - 일부 서브그래프는 IPFS와 같은 지원되지 않는 기능을 사용하거나 메인넷 외부의 다른 네트워크를 쿼리하기 때문에 인덱싱 보상을 생성하지 않습니다. 만약 서브그래프가 인덱싱 보상을 생성하지 않을 경우, 여러분들은 서브그래프 상에서 메세지를 보게 될 것입니다. +- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. -### 하드웨어 요구사항은 어떻게되나요? +### What are the hardware requirements? -- **Small** - 몇몇 서브그래프들에 대한 인덱싱을 시작하기에는 충분하지만, 추후에 더 개선해야할 가능성이 존재합니다. -- **Standard** - 기본 설정이며, 이는 k8s/terraform 배포 매니페스트에서 사용됩니다. -- **Medium** - 100개의 Subgraph 및 초당 200 - 500개의 요청을 서포트 할 수 있는 프로덕션 인덱서입니다. -- **Large** - 현재 사용되는 모든 서브그래프들 및 관련 트레픽 요청의 처리에 대한 요건을 충족합니다. +- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. +- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| @@ -121,48 +121,48 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute | Medium | 16 | 64 | 2 | 32 | 64 | | Large | 72 | 468 | 3.5 | 48 | 184 | -### 인덱서가 취해야 할 기본적인 보안 예방 조치는 무엇인가요? +### What are some basic security precautions an indexer should take? -- **운영자 지갑** - 운영자 지갑을 설정하면 인덱서가 지분을 제어하는 키와 일상적인 작업을 제어하는 키를 분리할 수 있으므로 중요한 예방 조치가 됩니다. 자세한 내용은 [Stake in Protocol](/indexing#stake-in-the-protocol)를 읽어보시기 바랍니다. +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. -- **중요사항**: 포트들이 공공연하게 공개되는 것에 각별한 주의를 기울이시길 바랍니다. - **어드민 포트**는 반드시 잠겨있어야 합니다. 이는 아래에 자세히 설명된 더그래프 노드 JSON-RPC 및 인덱서 관리 엔드포인트가 포함됩니다. +- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. ## Infrastructure -인덱서 인프라의 중심에는 이더리움을 모니터링하고, 서브그래프 정의에 따라 데이터를 추출하고 로드하여 [GraphQL API](/about/introduction#how-the-graph-works)로 제공하는 그래프 노드가 있습니다. 더그래프 노드는 Ethereum EVM 노드 엔드포인트들과 IPFS 노드(데이터 소싱)에 연결되어야 합니다. 이는 해당 스토리지의 PostgreSQL 데이터베이스 및 네트워크와의 상호 작용을 용이하게 하는 인덱서 구성 요소들입니다. +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. -- **PostgreSQLPostgreSQL database** - 더그래프 노드의 메인 스토어입니다. 이곳에 서브그래프의 데이터가 저장됩니다. 또한 인덱서서비스 및 에이전트는 데이터베이스를 사용하여 상태 채널 데이터, 비용 모델 및 인덱싱 규칙을 저장합니다. +- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. -- **이더리움 앤드포인트** - 이더리움JSON-RPC API를 노출하는 앤드포인트입니다. 이는 단일 이더리움 클라이언트의 형태를 취하거나 다중에 걸친 로드 밸런싱이 보다 복잡한 설정이 될 수 있습니다. 특정 서브그래프는 Achive mode 및 API 추적 등 특정 이더리움 클라이언트 기능을 필요로 할 것이라는 점을 유념하는 것이 중요합니다. +- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. -- **IPFS 노드(5 미만 버젼)** - 서브그래프 배포 메타데이터는 IPFS네트워크에 보존됩니다. 더그래프 노드는 주로 서브그래프 배포 중에 IPFS 노드에 액세스하여 서브그래프 매니페스트와 연결된 모든 파일을 가져옵니다. 네트워크 인덱서는 자체 IPFS 노드를 호스트할 필요가 없으며 네트워크의 IPFS 노드는 https://ipfs.network.thegraph.com에서 호스팅됩니다. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. 
Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. -- **인덱서 서비스** - 네트워크와의 모든 필수 외부 커뮤니케이션을 처리합니다. 비용 모델과 인덱싱 상태를 공유하고, 게이트웨이에서 그래프 노드로 쿼리 요청을 전달하며, 게이트웨이를 사용하여 상태 채널을 통해 쿼리 결제를 관리합니다. +- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **인덱서 에이전트** - 네트워크에 등록, 그래프 노드에 대한 서브그래프 배포관리 및 할당 관리를 포함하여 체인에 상에서 인덱서 상호작용을 용이하게 합니다. Prometheus metrics 서버 – 더그래프 노드 및 인덱서 구성요소는 매트릭스 서버에 그들의 매트릭스를 기록합니다. +- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. -참고: 신속한 확장성을 지원하기 위해 쿼리 노드와 인덱스 노드 등 서로 다른 노드 세트간에쿼리 및 인덱싱 문제를 구분할 것을 권고합니다. +Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. -### 포트 개요 +### Ports overview -> **Firewall** - 오직 인덱서 서비스만 공개적으로 노출되어야 하며 관리 포트 및 데이터베이스 액세스를 잠그는데 특히 주의해야 합니다. 그래프 노드 JSON-RPC 엔드포인트(기본 포트: 8030), 인덱서 관리 API 엔드포인트(기본 포트: 18000), Postgres 데이터베이스 엔드포인트(기본 포트: 5432)는 노출되지 않아야 합니다. +> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. -#### 그래프 노드 +#### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------------ | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -170,25 +170,25 @@ Disputes can be viewed in the UI in an Indexer's profile page under the `Dispute | ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | | 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Google Cloud상의 Terraform을 사용한 서버 인프라 구축 +### Setup server infrastructure using Terraform on Google Cloud -#### 필수 구성요소 설치 +#### Install prerequisites - Google Cloud SDK - Kubectl command line tool - Terraform -#### Google Cloud Project 생성 +#### Create a Google Cloud Project -- Indexer 저장소 복제 혹은 탐색 +- Clone or navigate to the indexer repository. -- ./terraform 디렉토리로 이동. 여기서 모든 명령들이 실행되어야 합니다. +- Navigate to the ./terraform directory, this is where all commands should be executed. ```sh cd terraform ``` -- Google Cloud에 인증을 한 후, 새 프로젝트를 생성합니다. +- Authenticate with Google Cloud and create a new project. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- 새로운 프로젝트에 대한 결제를 가능하게 하기 위해 Google Cloud Console의 결제 페이지를 사용합니다 +- Use the Google Cloud Console's billing page to enable billing for the new project. -- Google Cloud 구성을 생성합니다. +- Create a Google Cloud configuration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- 요구되는 Google Cloud API들을 사용 가능하도록 설정합니다. +- Enable required Google Cloud APIs. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- 서비스 계정을 생성합니다. +- Create a service account. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- 다음 단계에서 작성될 데이터베이스와 Kubernetes 클러스터 간 피어링을 사용하도록 설정합니다. +- Enable peering between database and Kubernetes cluster that will be created in the next step. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- 최소 terraform 구성 파일을 생성합니다(필요에 따라 업데이트). +- Create minimal terraform configuration file (update as needed). ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### 인프라 생성을 위한 Terraform 사용 +#### Use Terraform to create infrastructure -어떠한 명령이라도 실행하기 전에 [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) 를 읽고, 이 디렉토리에서 `terraform.tfvars` 을 생성합니다. (혹은 이전 단계에서 우리가 생성한 파일을 수정하여 사용하셔도 됩니다.) 기본값을 재정의하거나 값을 설정해야 하는 각 변수에 대해 `terraform.tfvars`에 설정값을 입력합니다. +Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. -- 인프라 구성을 위해 아래의 명령어들을 실행합니다. +- Run the following commands to create the infrastructure. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -`kubectl apply -k $dir`의 모든 리소스들을 배포합니다. 
+Download credentials for the new cluster into `~/.kube/config` and set it as your default context. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### 인덱서를 위한 Kubernetes 구성요소 생성 +#### Creating the Kubernetes components for the indexer -- `k8s/overlays`디렉토리를 새로운 `$dir,` 디렉토리에 복사합니다. 그리고 `bases` 엔트리를`$dir/kustomization.yaml` 로 조정하여 `k8s/base`디렉토리로 지정하게 합니다. +- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. -- `$dir`의 모든 파일을 읽고 코멘트에 표시된 대로 값을 조정합니다. +- Read through all the files in `$dir` and adjust any values as indicated in the comments. Deploy all resources with `kubectl apply -k $dir`. -### 그래프 노드 +### Graph Node -[그래프 노드](https://github.com/graphprotocol/graph-node) 는 이벤트가 Ethereum 블록 체인을 소싱하여 GraphQL 엔드포인트를 통해 쿼리할 수 있는 데이터 저장소를 결정적으로 업데이트하는 오픈 소스 러스트 구현입니다. 개발자는 서브그래프를 사용하여 schema를 정의하고, 블록체인과 그래프 노드에서 소싱된 데이터를 변환하기 위한 매핑 세트를 사용하여 전체 체인을 동기화하고, 새로운 블록들을 모니터링하며, GraphQL 엔드포인트를 통해 이를 제공합니다. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. -#### 소스에서 시작하기 +#### Getting started from source -#### 필수 구성 요소 설치 +#### Install prerequisites - **Rust** @@ -307,7 +307,7 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Ubuntu 유저들에 대한 추가 요구사항** - Ubuntu 상에서 그래프 노드를 운영하기 위해서는 몇 가지 추가 패키지들이 요구됩니다. +- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config @@ -315,7 +315,7 @@ sudo apt-get install -y clang libpg-dev libssl-dev pkg-config #### Setup -1. PostgreSQL 데이터베이스 서버를 시작합니다. +1. Start a PostgreSQL database server ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. [그래프 노드](https://github.com/graphprotocol/graph-node) repo를 복사하고 `cargo build` 를 실행하여 소스를 구축합니다. +2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` -3. 이제 모든 종속 요소들이 설정되었으므로, Graph노드를 시작합니다. +3. Now that all the dependencies are setup, start the Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### 도커를 사용하여 시작하기 +#### Getting started using Docker -#### 필수 구성요소 +#### Prerequisites -- **이더리움 노드** - 기본적으로 독커 구성 설정은 여러분들의 호스트 머신에 이더리움을 연결하기 위해 [http://host.docker.internal:8545](http://host.docker.internal:8545) 메인넷을 사용할 것입니다. 여러분들은 `docker-compose.yaml`을 업데이트 함으로써 이 네트워크의 이름 및 url을 변경하실 수 있습니다. +- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. #### Setup -1. 그래프 노드를 복사하고 docker 디렉토리로 이동합니다. +1. 
Clone Graph Node and navigate to the Docker directory: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. 리눅스 사용자들의 경우에는 `docker-compose.yaml` 내의 `host.docker.internal` 대신 호스트 IP 주소를 사용합니다. 이 때, 아래의 스크립트를 사용합니다. +2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: ```sh ./setup.sh ``` -3. 여러분의 이더리움 엔드포인트에 연결될 로컬 그래프 노드를 시작합니다. +3. Start a local Graph Node that will connect to your Ethereum endpoint: ```sh docker-compose up ``` -### 인덱서 구성요소 +### Indexer components -성공적으로 네트워크에 참여하기 위해서는 거의 지속적인 모니터링과 상호작용이 필요하므로, 저희는 인덱서들의 네트워크 참여를 용이하게 하기 위해 Typescript 어플리케이션 제품군을 구축했습니다. 다음과 같은 세 가지 인덱서 구성요소가 존재합니다. +To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: -- **인덱서 에이전트** - 에이전트는 네트워크와 인덱서의 자체 인프라를 모니터링하고 인덱싱 및 할당되는 서브그래프 배포와 각각에 할당되는 양을 관리합니다. +- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. -- **인덱서 서비스** - 외부에 노출되어야 하는 유일한 구성요소인 이 서비스는 서브그래프 쿼리를 그래프 노드로 전달하고 쿼리 결제를 위한 상태 채널을 관리하며 게이트웨이와 같은 클라이언트에게 중요한 의사 결정 정보를 공유합니다. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- **Indexer CLI** - 인덱서 에이전트를 관리하기 위한 명령줄 인터페이스입니다. 이는 인덱서들이 비용모델 및 인덱싱 규칙들을 관리할 수 있도록 합니다. +- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. -#### 시작하기 +#### Getting started -인덱서 에이전트 및 인덱서 서비스는 그래프 노드 인프라와 함께 배치되어야 합니다. 인덱서 구성 요소를 위한 가상 실행 환경을 설정하는 방법은 여러 가지가 있습니다. 여기서는 NPM 패키지 또는 소스를 사용하여 baremetal 상에서 실행하거나, Google Cloud Kubernetes Engine의 Kubernetes 및 Docker를 통해 실행하는 방법에 대해 설명합니다. 이러한 설정 예제가 여러분들의 인프라로 잘 적용되지 않을 경우, 참조를 위한 커뮤니티 가이드가 있을 것입니다. [디스코드 채널](https://thegraph.com/discord) 에 방문하셔서 안녕! 이라고 말해보시길 바랍니다. 여러분들의 인덱서 구성 요소들을 시작하기 전에 반드시 [프로토콜 내에 스테이킹](/indexing#stake-in-the-protocol)을 해야 한다는 것을 기억하시길 바랍니다! +The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! -#### NPM 패키지를 사용할 경우 +#### From NPM packages ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### 소스를 사용할 경우 +#### From source ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... 
``` -#### 도커를 사용할 경우 +#### Using docker -- 레지스트리에서 이미지 불러오기 +- Pull images from the registry ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -**참고**: 콘테이너들을 시작한 이후에, 인덱서 서비스는 [http://localhost:7600](http://localhost:7600)에 접근할 수 있으며, 해당 인덱서 에이전트는 [http://localhost:18000/](http://localhost:18000/)에 인덱서 관리 API를 노출하여야 합니다. +Or build images locally from source ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- 구성요소 실행 +- Run the components ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -[Google Cloud상 Terraform 사용하여 서버인프라 구축하기](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) 섹션을 참고하시기 바랍니다. +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). -#### K9s 및 Terraform을 사용할 경우 +#### Using K8s and Terraform -인덱서 CLI는 `graph indexer`터미널에 접근할 수 있는 [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)를 위한 플러그인입니다. +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section -#### 사용 +#### Usage -> **참고**: 모든 런타임 구성 변수는 시작시 명령에 매개변수로 적용되거나 `COMPONENT_NAME_VARIABLE_NAME`(예. `INDEXER_AGENT_ETHEREUM`) 형식의 환경 변수를 사용할 수 있습니다. +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### 인덱서 에이전트 +#### Indexer agent ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### 인덱서 서비스 +#### Indexer service ```sh SERVER_HOST=localhost \ @@ -513,7 +513,7 @@ graph-indexer-service start \ | pino-pretty ``` -#### 인덱서 CLI +#### Indexer CLI The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. @@ -522,35 +522,35 @@ graph indexer connect http://localhost:18000 graph indexer status ``` -#### 인덱서 CLI를 사용한 인덱서 관리 +#### Indexer management using indexer CLI -인덱서 에이전트는 해당 인덱서를 대신하여 네트워크와 자동으로 상호 작용하기 위해서는 인덱서로부터의 입력이 필요합니다. 인덱서 에이전트 행동을 정의하는 메커니즘은**인덱싱 규칙**입니다. **인덱싱 규칙**을 사용하여, 인덱서는 인덱싱 하거나 쿼리를 제공하기 위해 서브그래프 선택에 대한 그들의 특별한 전략을 적용할 수 있습니다. 규칙은 에이전트에서 제공하는 GraphQL API를 통해 관리되며 이는 인덱서 관리 API로 알려져 있습니다. **인덱서 관리 API**와 상호작용하기 위해 추천되는 도구는 **Graph CLI**로의 확장인 **Indexer CLI**입니다. +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. -#### 사용 +#### Usage -**Indexer CLI**는 일반적으로 포트 포워딩을 통해 인덱서 에이전트에 연결되므로 CLI를 동일한 서버 또는 클러스터에서 실행할 필요가 없습니다. 여러분들의 시작에 도움을 드리고, 컨텍스트를 제공하기 위해 CLI에 대해 간략히 설명하도록 하겠습니다. 
+The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. -- `graph indexer connect ` - 인덱서 관리 API에 연결합니다. 일반적으로 서버에 대한 연결은 포트 포워딩을 통해 열려, CLI는 원격으로 쉽게 작동될 수 있습니다. (예: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - `all`을 ``로 사용하여 하나 혹은 그 이상의 인덱싱 규칙들을 가져오거나, `global`로 사용하여 글로벌 기본값을 가져옵니다. 추가적인 독립변수 `--merged` 는 글로벌 규칙과 병합되도록 특별한 규칙들을 배포하기 위해 특별히 사용될 수 있습니다. 인덱서 에이전트에 적용되는 방법은 이와 같습니다. +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. -- `graph indexer rules set [options] ...` - 하나 혹은 그 이상의 인덱싱 규칙들을 설정합니다. +- `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - 사용 가능한 경우, 서브그래프 배포 인덱싱을 시작하며, 해당`decisionBasis`를 `always`로 설정합니다. 이를 통해 인덱서 에이전트는 항상 그것을 인덱싱하도록 선택합니다. +- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - 배포에 대한 인덱싱을 정지하며, 해당 `decisionBasis` 를 never로 설정합니다. 이를 통해 인덱싱을 위한 배포들에 관한 결정을 할 때, 이 배포를 건너뜁니다. +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. -- `graph indexer rules maybe [options] ` — `rules`에 배포를 위한 `thedecisionBasis`를 설정합니다. 이를 통해 인덱서 에이전트는 이 배포를 인덱싱할지 여부를 결정하기 위해 인덱싱 규칙들을 사용하게 됩니다. +- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. -독립변수 `-output`을 사용하여 Output에 규칙들을 나타내는 모든 명령들은 지원되는 출력 형식 중 하나를 선택할 수 있습니다. (`table`, `yaml`, 및 `json`) +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. -#### 인덱싱 규칙 +#### Indexing rules -인덱싱 규칙은 글로벌 기본값으로 적용되거나, ID들을 사용하여 특정 서브그래프 배포들에 적용될 수 있습니다. 다른 필드들은 모두 선택사항인 반면에, `deployment`와 `decisionBasis` 영역은 필수사항입니다. 인덱싱 규칙에 `rules`가 `decisionBasis`로 되어있는 경우, 인덱서 에이전트는 해당 규칙에 대한 비지정 임계값을 해당 배포를 위해 네트워크에서 가져온 값과 비교합니다. 서브그래프 배포 값이 어떠한 임계값들 이상(혹은 이하)이면, 이는 인덱싱을 위해 선택됩니다. +Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. 
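As a minimal sketch of how such thresholds could be expressed with the Indexer CLI commands listed above — the deployment ID below is a placeholder, and the exact rule keys and invocation should be confirmed against the CLI's own help output:

```sh
# Minimal sketch — QmPlaceholderDeploymentId is a placeholder, not a real deployment ID.
# Let threshold rules drive decisions globally, requiring at least 5 GRT of allocated stake:
graph indexer rules set global decisionBasis rules minStake 5

# Require at least 10 GRT of curation signal on one specific deployment:
graph indexer rules set QmPlaceholderDeploymentId decisionBasis rules minSignal 10

# Inspect the merged global + per-deployment rules the agent will act on:
graph indexer rules get all --merged
```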
-예를 들어, 만약에 해당 글로벌 규칙이 **5** (GRT)의 `minStake`를 포함하면, 5개 이상의 GRT 지분이 할당된 모든 서브그래프들은 인덱싱됩니다. 임계값 규칙들은 `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, 그리고 `minAverageQueryFees`를 포함합니다. +For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. -데이터 모델: +Data model: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### 비용 모델 +#### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. 비용모델들은 마켓 및 쿼리 속성을 기반으로 한 쿼리들에 대한 동적 가격 책정을 제공합니다 인덱서 서비스는 쿼리에 응답하려는 각 서브그래프의 게이트웨이와 비용모델을 공유합니다. 결국, 게이트웨이는 쿼리당 인덱서 선택 결정 및 선택된 인덱서와의 지불 협상을 위해 비용 모델을 사용합니다. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. #### Agora -Agora 언어는 쿼리들에 대한 비용 모델을 공고하기 위한 유연한 형식을 제공합니다. Agora 가격 모델은 GraphQL 쿼리의 각 최상위 쿼리에 대해 순서대로 실행되는 일련의 성명입니다. 각 최상위 쿼리에 대해 일치하는 첫 번째 성명이 해당 쿼리의 가격을 결정합니다. +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. -성명은 GraphQL 쿼리를 일치시키는 데 사용되는 술어와 평가 시 비용을 소수점 단위의 GRT로 나타내는 비용 식으로 구성됩니다. 쿼리의 명명된 인수 위치에 있는 값은 술어에서 캡처되어 식에 사용될 수 있습니다. 어떠한 표현식에서 플레이스 홀더들을 위해 전체 내용은 설정 및 대체될 수도 있습니다. +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. -위의 모델을 사용하는 쿼리 가격책정 예시: +Example cost model: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -비용 모델 예시: +Example query costing using the above model: | Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### 해당 비용 모델 적용 +#### Applying the cost model -비용 모델은 데이터베이스에 저장하기 위해 인덱서 에이전트의 인덱서 관리 API로 비용 모델들을 전달하는 인덱서 CLI를 통해 적용됩니다. 그런 다음 이들에 대한 요청이 있을 때 마다, 해당 인덱서 서비스는 이들을 선정하여 게이트웨이들에 해당 비용 모델들을 제공합니다. +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## 네트워크와의 상호작용 +## Interacting with the network -### 프로토콜에 스테이킹하기 +### Stake in the protocol -네트워크에 인덱서로 참여하기 위한 첫 번째 단계는 프로토콜을 승인하고, 자금을 스테이킹하며, 일상적인 프로토콜 상호 작용을 위한 운영자 주소를 설정하는 것(선택적)입니다. 
\_ **참고**: 본 지침의 목적을 위하여 컨트렉트 상호작용에 리믹스가 사용 되지만,원하시는 툴 사용에 개의치 마시기 바랍니다.([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) 및 [MyCrypto](https://www.mycrypto.com/account)는 알려진 몇 가지 다른 툴입니다.) +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ -인덱서에 의해 생성된 이후, 건강한 할당은 4가지 상태를 거칩니다. +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. -#### 토큰 승인 +#### Approve tokens -1. 브라우저에서 [Remix app](https://remix.ethereum.org/)을 엽니다. +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. `File Explorer`에 [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json)와 함께 **GraphToken.abi**로 명명된 파일을 생성합니다. +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. 해당 에디터에서 선택되고 열린 `GraphToken.abi`를 통해 Remix 인터페이스에서 `Deploy` 및 `Run Transactions` 섹션으로 전환합니다. +3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. -4. 환경에서 `Injected Web3`를 선택하고, `Account`에서 여러분의 인덱서 주소를 선택합니다. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. - `At Address`옆에 그래프 토큰 컨트렉트 주소를 붙여 넣습니다.(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) 이후 `At address` 버튼을 클릭하여 적용합니다. +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. -6. 스테이킹 컨트렉트를 승인하기 위해 `approve(spender, amount)` 기능을 불러옵니다. `spender`에 스테이킹 컨트렉트 주소 (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`)를 채워넣고, `amount`에 스테이킹 할 토큰과 함께 수량을 입력합니다. +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). -#### 토큰 스테이킹 +#### Stake tokens -1. 브라우저에서 [Remix app](https://remix.ethereum.org/)을 엽니다. +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. `File Explorer`에 staking ABI와 함께 **Staking.abi**로 명명된 파일을 생성합니다. +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. -3. 에디터에서 선택되고 열린 `Staking.abi`를 통해, Remix 인터페이스에서 `Deploy` 및 `Run Transactions` 섹션으로 전환합니다. +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. -4. 환경에서 `Injected Web3`를 선택하고, `Account`에서 여러분의 인덱서 주소를 선택합니다. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. - `At Address` 옆에 스테이킹 컨트렉트 주소(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) 를 붙여넣고, `At address`버튼을 클릭하여 적용합니다. +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. -6. 
프로토콜에 GRT를 스테이킹 하기 위해 `stake()`를 호출합니다. +6. Call `stake()` to stake GRT in the protocol. -7. (선택 사항) 인덱서는 자금을 제어하는 키를 서브그래프 할당 및 (유료) 쿼리 제공과 같은 일상적인 작업을 수행하는 키로부터 분리하기 위해 인덱서 인프라의 운영자로 다른 주소를 승인할 수 있습니다. 운영자 설정을 위해 해당 운영자 주소와 함께 `setOperator()`를 호출합니다. +7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (선택사항) Indexer들은 보상의 분배를 제어하고 전략적으로 위임자들을 끌어들이기 위해 그들의 indexingRewardCut(백만 개 당), queryFeecut(백만개 당) 그리고 cooldownBlocks(블록들의 수)를 업데이트 함으로써 그들의 위임 매개 변수를 업데이트 할 수 있습니다. 이를 위해 `setDelegationParameters()`를 호출합니다. 아래의 예제는 쿼리 보상의 95%를 인덱서에게 분배하고, 5%를 위임자들에게 분배하도록 queryFeeCut을 설정하고, 인덱싱 리워드의 60%를 Indexer에게 분배하고, 40%를 위임자들에게 분배하도록 설정하며, `thecooldownBlocks`의 기간을 500블록으로 설정합니다. +8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### 할당의 수명 +### The life of an allocation After being created by an indexer a healthy allocation goes through four states. -- **활성** - 어떠한 할당이 온체인상에 생성되면([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)), 이는 **활성**으로 간주됩니다. 인덱서 자체 및/또는 위임된 지분 일부가 서브그래프 배포에 할당되고, 이는 그들이 인덱싱 보상을 청구하고 해당 서브그래프 배포에 대한 쿼리를 제공할 수 있도록 합니다. 해당 인덱서 에이전트는 인덱서 규칙에 의거하여 할당 생성을 관리합니다. +- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. -- **종료** - 인덱서는 1 Epoch가 지나면 할당을 종료할 수 있습니다([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)). 이외에도 해당 인덱서 에이전트는 **maxAllocationEpochs**(현재 28일) 가 지난 후 할당을 자동으로 종료합니다. 유효한 인덱싱 증명(POI)으로 할당이 종료되면 해당 인덱싱 보상이 인덱서 및 해당 위임자들에게 배포됩니다(자세한 내용은 아래의 "보상은 어떻게 분배되나요?" +- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). -- **완결** - 할당이 종료되면 분쟁 기간이 존재하며, 이 분쟁기간 이후에 해당 할당이 **완결**된 것으로 간주되며, 쿼리 수수료 리베이트 또한 클레임(claim()) 가능해집니다. 인덱서 에이전트는 네트워크를 모니터링하여 **완결** 상태인 할당들을 탐지하고 구성 가능한(선택 사항) 임계값인 **—-allocation-claim-threshold**을 초과할 경우 이들을 청구합니다. 
+- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. -- **청구 완료** - 할당의 최종 상태입니다. - 활성 할당으로 모든 과정을 실행하고, 모든 적격 보상이 배포되었으며 쿼리 수수료 리베이트들이 청구된 상태입니다. +- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. From a4e09b752c1796dc95dbc075d660a338f806f76e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:28 -0500 Subject: [PATCH 088/241] New translations indexing.mdx (Chinese Simplified) --- pages/zh/indexing.mdx | 394 +++++++++++++++++++++--------------------- 1 file changed, 197 insertions(+), 197 deletions(-) diff --git a/pages/zh/indexing.mdx b/pages/zh/indexing.mdx index 5a3f24a80d4e..40d1085c602f 100644 --- a/pages/zh/indexing.mdx +++ b/pages/zh/indexing.mdx @@ -4,47 +4,47 @@ title: 索引 import { Difficulty } from '@/components' -索引人是 The Graph 网络中的节点运营商,他们质押 Graph 通证 (GRT) 以提供索引和查询处理服务。 索引人通过他们的服务赚取查询费和索引奖励。 他们还根据 Cobbs-Douglas 回扣函数从回扣池中赚取收益,该回扣池与所有网络贡 ​​ 献者按他们的工作成比例共享。 +Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. -抵押在协议中的 GRT 会受到解冻期的影响,如果索引人是恶意的并向应用程序提供不正确的数据或索引不正确,则可能会被削减。 索引人也可以从委托人那里获得委托,为网络做出贡献。 +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. -索引人根据子图的策展信号选择要索引的子图,其中策展人质押 GRT 以指示哪些子图是高质量的并应优先考虑。 消费者(例如应用程序)还可以设置索引人处理其子图查询的参数,并设置查询费用定价的偏好。 +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. -## 常见问题 +## FAQ -### 成为网络索引人所需的最低股份是多少? +### What is the minimum stake required to be an indexer on the network? -索引人的最低抵押数量目前设置为 10w 个 GRT。 +The minimum stake for an indexer is currently set to 100K GRT. -### 索引人的收入来源是什么? +### What are the revenue streams for an indexer? -**查询费返利** - 为网络上的查询服务支付的费用. 这些支付通过索引人和网关之间的状态通道进行调解。 These payments are mediated via state channels between an indexer and a gateway. 来自网关的每个查询请求都包含一个支付和相应的响应,一个查询结果有效性的证明。 来自网关的每个查询请求都包含一个支付和相应的响应,一个查询结果有效性的证明。 +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**索引奖励** - 通过每年 3%的协议范围通货膨胀产生,索引奖励分配给为网络进行子图部署索引的索引人。 +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. -### 奖励如何分配? 
+### How are rewards distributed? -索引奖励来自协议通胀,每年发行量设定为 3%。 它们根据每个子图上所有策展信号的比例分布在子图上,然后根据他们在该子图上分配的股份按比例分配给索引人。 **一项分配必须以符合仲裁章程规定的标准的有效索引证明(POI)来结束,才有资格获得奖励。** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -社区创建了许多用于计算奖励的工具,您会在 [“社区指南”集合](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)中找到它们。 您还可以在 [Discord 服务器](https://discord.gg/vtvv7FP)上的 #delegators 和 #indexers 频道 ​​ 中找到最新的工具列表。 +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). -### 什么是索引证明 (POI)? +### What is a proof of indexing (POI)? -网络中使用 POI 来验证索引人是否正在索引它们分配的子图。 在关闭该分配的分配时,必须提交当前时期第一个区块的 POI,才有资格获得索引奖励。 块的 POI 是特定子图部署的所有实体存储事务的摘要,直到并包括该块。 +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. -### 索引奖励什么时候发放? +### When are indexing rewards distributed? -分配在活跃时不断累积奖励。 奖励由索引人收集,并在分配结束时分发。 这可以手动发生,每当索引人想要强制关闭它们时,或者在 28 个时期后,委托人可以关闭索引人的分配,但这会导致没有奖励被铸造。 28 个时期 是最大分配生命周期(现在,一个 时期持续约 24 小时)。 +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). -### 可以监控待处理的索引人奖励吗? +### Can pending indexer rewards be monitored? -许多社区制作的仪表板包括待处理的奖励值,可以通过以下步骤轻松地手动检查它们: +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. -使用 Etherscan 调用`getRewards()`: +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. 查询主网[子图](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) 以获取所有活动分配的 ID: +1. 
Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -60,135 +60,135 @@ query indexerAllocations { } ``` -使用Etherscan调用 `getRewards()`: +Use Etherscan to call `getRewards()`: -- 导航到[奖励合约的 Etherscan 界面](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* 调用`getRewards()`: - - 展开 **10. getRewards** 下拉菜单。 getRewards dropdown. - - 在输入中输入**分配 ID**. - - 点击**查询**按钮. +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. -### 什么是争议? 在哪里可以查看? +### What are disputes and where can I view them? -在争议期间,索引人的查询和分配都可以在 The Graph 上进行争议。 争议期限因争议类型而异。 查询/证明有 7 个时期的争议窗口,而分配有 56 个时期。 在这些期限过后,不能对分配或查询提出争议。 当争议开始时,渔夫需要至少 10,000 GRT 的押金,押金将被锁定,直到争议结束并给出解决方案。 渔夫是任何引发争议的网络参与者。 +Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. -可以在 UI 中的索引人配置文件页面中的 `Disputes` 选项卡下查看争议 。 +Disputes have **three** possible outcomes, so does the deposit of the Fishermen. -- 如果争议被驳回,渔夫存入的 GRT 将被烧毁,争议的 索引人将不会被削减。 -- 如果以平局方式解决争议,渔夫的押金将被退还,并且争议的索引人不会被削减。 -- 如果争议被接受,渔夫存入的 GRT 将被退回,有争议的 索引人将被削减,渔夫将获得被削减的 GRT 的 50%。 +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. +- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. -争议可以在用户界面中的 `争议 `标签下的索引人档案页中查看。 +Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. -### 什么是查询费奖励? 何时发放? +### What are query fee rebates and when are they distributed? -每当分配关闭并累积在子图的查询费用回扣池中时,网关就会收取查询费用。 回扣池旨在鼓励索引人按他们为网络赚取的查询费用的粗略比例分配股份。 池中分配给特定索引人的查询费用部分使用 Cobbs-Douglas 生产函数计算;每个索引人的分配量是他们对池的贡献和他们在子图上的股份分配的函数。 +Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. -一旦分配已结束且争议期已过,索引人就可以要求回扣。 查询费用回扣根据查询费用减免和委托池比例分配给索引人及其委托人。 +Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. 
Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. -### 什么是查询费减免和索引奖励减免? +### What is query fee cut and indexing reward cut? -`queryFeeCut` 和 `indexingRewardCut` 值是委托的参数,该索引可以设置连同 cooldownBlocks 控制 GRT 的索引和他们的委托人之间的分配。 有关设置委托参数的说明,请参阅[协议中的质押](/indexing#stake-in-the-protocol)的最后步骤。 +The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. -- **查询费用削减** - 在将分配给索引人的子图上累积的查询费用回扣的百分比。 如果将其设置为 95%,则在申请分配时,索引人将获得查询费用回扣池的 95%,另外 5% 将分配给委托人。 +- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. -- **索引奖励削减** - 将分配给索引人的子图上累积的索引奖励的百分比。 如果将其设置为 95%,则当分配结束时,索引人将获得索引奖励池的 95%,而委托人将分配其他 5%。 +- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. -### 索引人如何知道要索引哪些子图? +### How do indexers know which subgraphs to index? -索引人基础设施的中心是 Graph 节点,它监控 Ethereum,根据子图定义提取和加载数据,并以 [GraphQL API](/about/introduction#how-the-graph-works)形式为其服务 Graph 节点需要连接到 Ethereum EVM 节点端点,以及 IPFS 节点,用于采购数据;PostgreSQL 数据库用于其存储;以及索引人组件,促进其与网络的交互。 +Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: -- **策展信号** - 应用于特定子图的网络策展信号的比例是对该子图兴趣的一个很好的指标,尤其是在引导阶段,当查询量不断上升时。 +- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. -- **收取的查询费** - 特定子图收取的查询费的历史数据是未来需求的良好指标。 +- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. -- **质押量** - 监控其他索引人的行为或查看分配给特定子图的总质押量的比例,可以让索引人监控子图查询的供应方,以确定网络显示出信心的子图或可能显示出需要更多供应的子图。 +- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. -- **没有索引奖励的子图** - 一些子图不会产生索引奖励,主要是因为它们使用了不受支持的功能,如 IPFS,或者因为它们正在查询主网之外的另一个网络。 如果子图未生成索引奖励,您将在子图上看到一条消息。 +- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. -### 对硬件有什么要求? +### What are the hardware requirements? 
-- **小型** - 足以开始索引几个子图,可能需要扩展。
-- **标准** - 默认设置,这是在 k8s/terraform 部署清单示例中使用的。
-- **中型** - 生产型索引人支持 100 个子图和每秒 200-500 个请求。
-- **大型** -准备对当前使用的所有子图进行索引,并为相关流量的请求提供服务
+- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
+- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.

-| 类型 | (CPU 数量) | (内存 GB) | (硬盘 TB) | (CPU 数量) | (内存 GB) |
-| ---- | :--------: | :-------: | :-------: | :--------: | :-------: |
-| 小型 | 4 | 8 | 1 | 4 | 16 |
-| 标准 | 8 | 30 | 1 | 12 | 48 |
-| 中型 | 16 | 64 | 2 | 32 | 64 |
-| 大型 | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |

-### 索引人应该采取哪些基本的安全防范措施?
+### What are some basic security precautions an indexer should take?

-- **操作员钱包** -设置操作员钱包是一项重要的预防措施,因为它允许索引人将控制权益的密钥和控制日常操作的钥匙分开。 有关说明请参见[协议中的内容](/indexing#stake-in-the-protocol) 介绍。
+- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions.

-- **防火墙** - 只有索引人服务需要公开,尤其要注意锁定管理端口和数据库访问:Graph 节点 JSON-RPC 端点(默认端口:8030)、索引人管理 API 端点(默认端口:18000)和 Postgres 数据库端点(默认端口:5432)不应暴露。
+- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.

-## 基础设施
+## Infrastructure

-索引人基础设施的中心是Graph节点,它监控以太坊,根据子图定义提取和加载数据,并将其作为[GraphQL API](/about/introduction#how-the-graph-works)提供。 The Graph节点需要连接到以太坊EVM节点端点,以及用于获取数据的IPFS节点;一个用于存储的PostgreSQL数据库;以及促进其与网络互动的索引人组件。
+At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network.

-- **PostgreSQL 数据库** - Graph 节点的主要存储,这是存储子图数据的地方。 索引人服务和代理也使用数据库来存储状态通道数据、成本模型和索引规则。
+- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules.

-- **Ethereum endpoint** -公开 Ethereum JSON-RPC API 的端点。 这可能采取单个 Ethereum 客户端的形式,也可能是一个更复杂的设置,在多个客户端之间进行负载平衡。 需要注意的是,某些子图将需要特定的 Ethereum 客户端功能,如存档模式和跟踪 API。
+- **Ethereum endpoint** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API.

-- ** IPFS 节点(版本小于 5)** - 子图部署元数据存储在 IPFS 网络上。 The Graph节点在子图部署期间主要访问IPFS节点,以获取子图清单和所有链接文件。 网络索引人不需要托管自己的IPFS节点,网络的IPFS节点是托管在https://ipfs.network.thegraph.com。
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

-- **索引人服务** -处理与网络的所有必要的外部通信。 共享成本模型和索引状态,将来自网关的查询请求传递给一个 Graph 节点,并通过状态通道与网关管理查询支付。
+- **Indexer service** - Handles all required external communications with the network.
Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.

-- **索引人代理** - 促进索引人在链上的交互,包括在网络上注册,管理子图部署到其 Graph 节点,以及管理分配。 Prometheus 指标服务器- Graph 节点 和 Indexer 组件将其指标记录到指标服务器。
+- **Indexer agent** - Facilitates the indexer's interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server.

-注意:为了支持敏捷扩展,建议在不同的节点集之间分开查询和索引问题:查询节点和索引节点。
+Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.

-### 端口概述
+### Ports overview

-> **重要**: 公开暴露端口时要小心 - **管理端口** 应保持锁定。 这包括下面详述的 Graph 节点 JSON-RPC 和索引人管理端点。
+> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC and the indexer management endpoints detailed below.

-#### Graph 节点
+#### Graph Node

-| 端口 | 用途 | 路径 | CLI参数 | 环境 变量 |
-| ---- | ------------------------------------ | ---------------------------------------------- | ----------------- | ----- |
-| 8000 | GraphQL HTTP 服务<br />(用于子图查询) | /subgraphs/id/...<br />/subgraphs/name/.../... | --http-port | - |
-| 8001 | GraphQL WS<br />(用于子图订阅) | /subgraphs/id/...<br />/subgraphs/name/.../... | --ws-port | - |
-| 8020 | JSON-RPC<br />(用于管理部署) | / | --admin-port | - |
-| 8030 | 子图索引状态 API | /graphql | --index-node-port | - |
-| 8040 | Prometheus 指标 | /metrics | --metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ----------------- | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | --http-port | - |
+| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | --ws-port | - |
+| 8020 | JSON-RPC<br />(for managing deployments) | / | --admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | --metrics-port | - |

-#### 索引人服务
+#### Indexer Service

-| 端口 | 用途 | 路径 | CLI参数 | 环境 变量 |
-| ---- | ---------------------------------------- | ------------------------------------------------------------ | -------------- | ---------------------- |
-| 7600 | GraphQL HTTP 服务<br />(用于付费子图查询) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus 指标 | /metrics | --metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ------------------------------------------------------------ | -------------- | ---------------------- |
+| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | --metrics-port | - |

-#### 索引人代理
+#### Indexer Agent

-| 端口 | 用途 | 路径 | CLI参数 | 环境
变量 | -| ---- | --------- | -- | ------------------------- | --------------------------------------- | -| 8000 | 索引人管理 API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | +| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Google Cloud 上使用 Terraform 建立基础架构 +### Setup server infrastructure using Terraform on Google Cloud -#### 安装先决条件 +#### Install prerequisites -- 谷歌云 SDK -- Kubectl 命令行工具 +- Google Cloud SDK +- Kubectl command line tool - Terraform -#### 创建一个谷歌云项目 +#### Create a Google Cloud Project -- 克隆或导航到索引人存储库。 +- Clone or navigate to the indexer repository. -- 导航到./terraform 目录,这是所有命令应该执行的地方。 +- Navigate to the ./terraform directory, this is where all commands should be executed. ```sh cd terraform ``` -- 通过谷歌云认证并创建一个新项目。 +- Authenticate with Google Cloud and create a new project. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- 使用 Google Cloud Console 的计费页面为新项目启用计费。 +- Use the Google Cloud Console's billing page to enable billing for the new project. -- 创建谷歌云配置。 +- Create a Google Cloud configuration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- 启用所需的 Google Cloud API。 +- Enable required Google Cloud APIs. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- 创建一个服务账户。 +- Create a service account. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- 启用将在下一步中创建的数据库和 Kubernetes 集群之间的对等连接。 +- Enable peering between database and Kubernetes cluster that will be created in the next step. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- 创建最小的 terraform 配置文件(根据需要更新)。 +- Create minimal terraform configuration file (update as needed). ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### 使用 Terraform 创建基础设施 +#### Use Terraform to create infrastructure -在运行任何命令之前,先阅读 [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) 并在这个目录下创建一个文件`terraform.tfvars`(或者修改我们在上一步创建的文件)。 对于每一个想要覆盖默认值的变量,或者需要设置值的变量,在 `terraform.tfvars`中输入一个设置。 +Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. -- 运行以下命令来创建基础设施。 +- Run the following commands to create the infrastructure. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -用`kubectl apply -k $dir`. 部署所有资源。 +Download credentials for the new cluster into `~/.kube/config` and set it as your default context. 
```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### 为索引人创建 Kubernetes 组件 +#### Creating the Kubernetes components for the indexer -- 将目录`k8s/overlays` 复制到新的目录 `$dir,` 中,并调整`bases` 中的`$dir/kustomization.yaml`条目,使其指向目录`k8s/base`。 +- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. -- 读取`$dir`中的所有文件,并按照注释中的指示调整任何值。 +- Read through all the files in `$dir` and adjust any values as indicated in the comments. -用以下方法部署所有资源`kubectl apply -k $dir`. +Deploy all resources with `kubectl apply -k $dir`. -### Graph 节点 +### Graph Node -[Graph 节点](https://github.com/graphprotocol/graph-node) 是一个开源的 Rust 实现,它将 Ethereum 区块链事件源化,以确定地更新一个数据存储,可以通过 GraphQL 端点进行查询。 开发者使用子图来定义他们的模式,以及一组用于转换区块链来源数据的映射,Graph 节点处理同步整个链,监控新的区块,并通过 GraphQL 端点提供服务。 +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. -#### 从来源开始 +#### Getting started from source -#### 安装先决条件 +#### Install prerequisites - **Rust** @@ -307,15 +307,15 @@ kubectl config use-context $(kubectl config get-contexts --output='name' - **IPFS** -- **Ubuntu 用户的附加要求** - 要在 Ubuntu 上运行 Graph 节点,可能需要一些附加的软件包。 +- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### 类型 +#### Setup -1. 启动 PostgreSQL 数据库服务器 +1. Start a PostgreSQL database server ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. 克隆[Graph 节点](https://github.com/graphprotocol/graph-node)repo,并通过运行 `cargo build`来构建源代码。 +2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` -3. 现在,所有的依赖关系都已设置完毕,启动 Graph 节点。 +3. Now that all the dependencies are setup, start the Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### 使用 Docker +#### Getting started using Docker -#### 先决条件 +#### Prerequisites -- **Ethereum 节点** - 默认情况下,docker 编译设置将使用 mainnet:[http://host.docker.internal:8545](http://host.docker.internal:8545) 连接到主机上的 Ethereum 节点。 你可以通过更新 `docker-compose.yaml`来替换这个网络名和 url。 +- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. -#### 安装 +#### Setup -1. 克隆 Graph 节点并导航到 Docker 目录。 +1. Clone Graph Node and navigate to the Docker directory: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. 仅适用于 linux 用户 - 在`docker-compose.yaml`中使用主机 IP 地址代替 `host.docker.internal`并使用附带的脚本。 +2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: ```sh ./setup.sh ``` -3. 
启动一个本地 Graph 节点,它将连接到你的 Ethereum 端点。 +3. Start a local Graph Node that will connect to your Ethereum endpoint: ```sh docker-compose up ``` -### 索引人组件 +### Indexer components -要成功地参与网络,需要几乎持续的监控和互动,所以我们建立了一套 Typescript 应用程序,以方便索引人的网络参与。 有三个索引人组件。 +To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: -- **索引人代理** - 代理监控网络和索引人自身的基础设施,并管理哪些子图部署被索引和分配到链上,以及分配到每个子图的数量。 +- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. -- **索引人服务** - 唯一需要对外暴露的组件,该服务将子图查询传递给节点,管理查询支付的状态通道,将重要的决策信息分享给网关等客户端。 +- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- **索引人 CLI** - 用于管理索引人代理的命令行界面。 它允许索引人管理成本模型和索引规则。 +- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. -#### 开始 +#### Getting started -索引人代理和索引人服务应该与你的 Graph 节点基础架构共同定位。 有很多方法可以为你的索引人组件设置虚拟执行环境,这里我们将解释如何使用 NPM 包或源码在裸机上运行它们,或者通过谷歌云 Kubernetes 引擎上的 kubernetes 和 docker 运行。 如果这些设置实例不能很好地转化为你的基础设施,很可能会有一个社区指南供参考,请到[Discord](https://thegraph.com/discord)上打招呼。 在启动你的索引人组件之前,请记住[在协议中签名](/indexing#stake-in-the-protocol)! +The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! -#### 来自 NPM 包 +#### From NPM packages ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### 来自来源 +#### From source ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### 使用 docker +#### Using docker -- 从注册表中提取图像 +- Pull images from the registry ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -**注意**: 启动容器后,索引人服务应该在[http://localhost:7600](http://localhost:7600) 索引人代理应该在[http://localhost:18000/](http://localhost:18000/)。 +Or build images locally from source ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- 运行组件 +- Run the components ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -请参阅 [在 Google Cloud 上使用 Terraform 设置服务器基础架构](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) 一节。 +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). 
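A minimal sketch for sanity-checking the two endpoints once the containers are up, assuming the default ports above and that `curl` is available (the exact response bodies will vary with your configuration):

```sh
# Indexer service listens on port 7600; /status is one of its documented routes
curl -s http://localhost:7600/status

# Indexer agent exposes the indexer management API on port 18000
curl -s http://localhost:18000/
```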
-#### 使用 K8s 和 Terraform +#### Using K8s and Terraform -Indexer CLI 是 [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) 的一个插件,可以在终端的`graph indexer`处访问。 +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section -#### 使用方法 +#### Usage -> **注意**: 所有的运行时配置变量可以在启动时作为参数应用到命令中,也可以使用格式为 `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`) 的环境变量。 +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### 索引代理 +#### Indexer agent ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### 索引人服务 +#### Indexer service ```sh SERVER_HOST=localhost \ @@ -513,44 +513,44 @@ graph-indexer-service start \ | pino-pretty ``` -#### 索引人 CLI +#### Indexer CLI -Indexer CLI是一个可以在终端访问`graph indexer`的插件,地址是[`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)。 +The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### 使用索引人 CLI 管理索引人 +#### Indexer management using indexer CLI -索引人代理需要来自索引人的输入,才能代表索引人自主地与网络交互。 定义索引人代理行为的机制是**索引规则**. 使用**索引规则**,索引人可以应用其特定的策略来选择子图进行索引和服务查询。 使用**索引规则** ,索引人可以应用他们特定的策略来挑选子图,为其建立索引和提供查询。 规则是通过由代理提供的 GraphQL API 来管理的,被称为索引人管理 API。 与**索引管理 API**交互的建议工具是 **索引人 CLI** ,它是 **Graph CLI**的扩展。 +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. -#### 使用方法 +#### Usage -**索引人 CLI ** 连接到索引人代理,通常是通过端口转发,因此 CLI 不需要运行在同一服务器或集群上。 为了帮助你入门,并提供一些上下文,这里将简要介绍 CLI。 +The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. -- `graph indexer connect ` - 连接到索引人管理 API。 通常情况下,与服务器的连接是通过端口转发打开的,所以 CLI 可以很容易地进行远程操作。 (例如: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` -获取一个或多个索引规则,使用 `all` 作为`` 来获取所有规则,或使用 global 来获取全局默认规则。 可以使用额外的参数 `--merged` 来指定将特定部署规则与全局规则合并。 这就是它们在索引人代理中的应用方式。 +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. 
-- `graph indexer rules set [options] ...` -设置一个或多个索引规则。 +- `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - 开始索引子图部署(如果可用),并将其`decisionBasis`设置为`always`, 这样索引人代理将始终选择对其进行索引。 如果全局规则被设置为总是,那么网络上所有可用的子图都将被索引。 +- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` -停止对某个部署进行索引,并将其 `decisionBasis`设置为 never, 这样它在决定要索引的部署时就会跳过这个部署。 +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. -- `graph indexer rules maybe [options] ` —将部署的 `thedecisionBasis`设置为`规则`, 这样索引人代理将使用索引规则来决定是否对这个部署进行索引。 +- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. -所有在输出中显示规则的命令都可以使用 `-output`参数在支持的输出格式(`table`, `yaml`, and `json`)之间进行选择 +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. -#### 索引规则 +#### Indexing rules -索引规则既可以作为全局默认值应用,也可以用于使用其 ID 的特定子图部署。 `deployment` 和 `decisionBasis`字段是强制性的,而所有其他字段都是可选的。 当索引规则`rules` 作为`decisionBasis`时, 索引人代理将比较该规则上的非空阈值与从相应部署的网络获取的值。 如果子图部署的值高于(或低于)任何阈值,它将被选择用于索引。 +Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -例如,如果全局规则的`minStake` 值为**5** (GRT), 则分配给它的权益超过 5 (GRT) 的任何子图部署都将被编入索引。 阈值规则包括`maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, 和 `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. -数据模型: +Data model: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### 成本模式 +#### Cost models -成本模型根据市场和查询属性为查询提供动态定价。 索引服务处与网关共享每个子网的成本模型,它们打算对每个子网的查询作出回应。 而网关则使用成本模型来做出每个查询的索引人选择决定,并与所选的索引人进行付费谈判。 +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. #### Agora -Agora 语言提供了一种灵活的格式来声明查询的成本模型。 Agora 价格模型是一系列的语句,它们按照 GraphQL 查询中每个顶层查询的顺序执行。 对于每个顶层查询,第一个与其匹配的语句决定了该查询的价格。 +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. 
For each top-level query, the first statement which matches it determines the price for that query. -语句由一个用于匹配 GraphQL 查询的谓词和一个成本表达式组成,该表达式在评估时输出一个以十进制 GRT 表示的成本。 查询的命名参数位置中的值可以在谓词中捕获并在表达式中使用。 也可以在表达式中设置全局,并代替占位符。 +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. -使用上述模型的查询成本计算示例。 +Example cost model: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -成本模型示例: +Example query costing using the above model: -| 询问 | 价格 | +| Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### 应用成本模式 +#### Applying the cost model -成本模型是通过索引人 CLI 应用的,CLI 将它们传递给索引人代理的索引人管理 API,以便存储在数据库中。 然后,索引人服务将接收这些模型,并在网关要求时将成本模型提供给它们。 +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## 与网络的交互 +## Interacting with the network -### 在协议中进行质押 +### Stake in the protocol -作为索引人参与网络的第一步是批准协议、质押资金,以及(可选)设置一个操作员地址以进行日常协议交互。 _ **注意**: 在这些说明中,Remix 将用于合约交互,但请随意使用您选择的工具([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), 和[MyCrypto](https://www.mycrypto.com/account) 是其他一些已知的工具)._ +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ -被索引人创建后,一个健康的配置会经历四种状态。 +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. -#### 批准令牌 +#### Approve tokens -1. 在浏览器中打开[Remix app](https://remix.ethereum.org/) 。 +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. 使用[token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).在`File Explorer`文件夹中创建一个名为**GraphToken.abi**的文件。 +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. 在编辑器中选择`GraphToken.abi` 并打开,切换到部署 `Run Transactions` 选项中。 +3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. -4. 环境选择`Injected Web3`并在`Account` 下面选择你的索引人地址。 +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. 
设置 GraphToken 合约地址 - 将 GraphToken 地址(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) 粘贴到`At Address` 旁边 ,单击,`At address` 按钮。 +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. -6. 调用`approve(spender, amount)`函数以批准 Staking 合约。 用质押合约地址(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) 填写`spender` ,`amount` 要质押的代币数量 (in wei). +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). -#### 质押代币 +#### Stake tokens -1. 在浏览器中打开[Remix app](https://remix.ethereum.org/)。 +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. 在 `File Explorer` 创建一个名为**Staking.abi** 的文件中,使用 staking ABI. +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. -3. 在编辑器中选择`GraphToken.abi` 并打开,切换到部署 `Run Transactions` 选项中。 +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. -4. 在环境选择`Injected Web3` 然后`Account` s 选择您的索引人地址。 +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. 设置 GraphToken 合约地址 - 将 GraphToken 地址(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) 粘贴到`At Address` 旁边 ,单击,`At address` 按钮。 +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. -6. 调用 `stake()` 质押 GRT。 +6. Call `stake()` to stake GRT in the protocol. -7. (可选)索引人可以批准另一个地址作为其索引人基础设施的操作员,以便将控制资金的密钥与执行日常操作,例如在子图上分配和服务(付费)查询的密钥分开。 用`setOperator()` 设置操作员地址。 +7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. 可选)为了控制奖励的分配和战略性地吸引委托人,索引人可以通过更新他们的索引人奖励削减(百万分之一)、查询费用削减(百万分之一)和冷却周期(块数)来更新他们的委托参数。 使用 `setDelegationParameters()`设置。 以下示例设置查询费用削减将 95% 的查询返利分配给索引人,5% 给委托人,设置索引人奖励削减将 60% 的索引奖励分配给索引人,将 40% 分配给委托人,并将`冷却周期`设置为 500 个区块。 +8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### 分配的生命周期 +### The life of an allocation After being created by an indexer a healthy allocation goes through four states. 
-- **活跃** -一旦在链上创建分配([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) 它就被认为是**活跃**。 索引人自身和/或被委托的一部分权益被分配给子图部署,这使得他们可以要求索引奖励并为该子图部署提供查询。 索引人代理根据索引人规则管理创建分配。 +- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. -- **关闭** -索引人可以在 1 个纪元过去后自由关闭一个分配([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) ,或者他们的索引人代理将在**maxAllocationEpochs** (当前为 28 天)之后自动关闭该分配。 当一个分配以有效的索引证明(POI) 关闭时,他们的索引奖励将被分配给索引人及其委托人(参见下面的"奖励是如何分配的?"以了解更多)。 +- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). -- **完成** - 一旦一个分配被关闭,就会有一个争议期,之后该分配被认为是 **最终确定**的,它的查询费返利可以被申领(claim())。 索引人代理监视网络以检测**最终**分配,如果它们高于可配置(和可选)阈值--**—-allocation-claim-threshold**,则声明它们。 +- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. -- **申领** - 分配的最终状态;它已经完成了作为活跃分配的过程,所有符合条件的奖励已经分配完毕,其查询费返利也已申领。 +- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. From 2badd32b4eafec23e6686369c050c1bf8edce64f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:29 -0500 Subject: [PATCH 089/241] New translations indexing.mdx (Vietnamese) --- pages/vi/indexing.mdx | 715 +++++++++++++++--------------------------- 1 file changed, 256 insertions(+), 459 deletions(-) diff --git a/pages/vi/indexing.mdx b/pages/vi/indexing.mdx index b543436f0049..090b1be2b226 100644 --- a/pages/vi/indexing.mdx +++ b/pages/vi/indexing.mdx @@ -4,47 +4,47 @@ title: Indexer import { Difficulty } from '@/components' -Indexer là những người vận hành node (node operator) trong Mạng The Graph có stake Graph Token (GRT) để cung cấp các dịch vụ indexing và xử lý truy vấn. Indexers kiếm được phí truy vấn và phần thưởng indexing cho các dịch vụ của họ. Họ cũng kiếm được tiền từ Rebate Pool (Pool Hoàn phí) được chia sẻ với tất cả những người đóng góp trong mạng tỷ lệ thuận với công việc của họ, tuân theo Chức năng Rebate Cobbs-Douglas. +Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. 
They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. -GRT được stake trong giao thức sẽ phải trải qua một khoảng thời gian chờ "tan băng" (thawing period) và có thể bị cắt nếu Indexer có ác ý và cung cấp dữ liệu không chính xác cho các ứng dụng hoặc nếu họ index không chính xác. Indexer cũng có thể được ủy quyền stake từ Delegator, để đóng góp vào mạng. +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. -Indexer chọn các subgraph để index dựa trên tín hiệu curation của subgraph, trong đó Curator stake GRT để chỉ ra subgraph nào có chất lượng cao và cần được ưu tiên. Bên tiêu dùng (ví dụ: ứng dụng) cũng có thể đặt các tham số (parameter) mà Indexer xử lý các truy vấn cho các subgraph của họ và đặt các tùy chọn cho việc định giá phí truy vấn. +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. -## CÂU HỎI THƯỜNG GẶP +## FAQ -### Lượng stake tối thiểu cần thiết để trở thành một indexer trên mạng là bao nhiêu? +### What is the minimum stake required to be an indexer on the network? -Lượng stake tối thiểu cho một indexer hiện được đặt là 100K GRT. +The minimum stake for an indexer is currently set to 100K GRT. -### Các nguồn doanh thu cho indexer là gì? +### What are the revenue streams for an indexer? -**Hoàn phí truy vấn** - Thanh toán cho việc phục vụ các truy vấn trên mạng. Các khoản thanh toán này được dàn xếp thông qua các state channel giữa indexer và cổng. Mỗi yêu cầu truy vấn từ một cổng chứa một khoản thanh toán và phản hồi tương ứng là bằng chứng về tính hợp lệ của kết quả truy vấn. +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Phần thưởng Indexing** - Được tạo ra thông qua lạm phát trên toàn giao thức hàng năm 3%, phần thưởng indexing được phân phối cho các indexer đang lập chỉ mục các triển khai subgraph cho mạng lưới. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. -### Phần thưởng được phân phối như thế nào? +### How are rewards distributed? -Phần thưởng Indexing đến từ lạm phát giao thức được đặt thành 3% phát hành hàng năm. Chúng được phân phối trên các subgraph dựa trên tỷ lệ của tất cả các tín hiệu curation trên mỗi subgraph, sau đó được phân phối theo tỷ lệ cho các indexers dựa trên số stake được phân bổ của họ trên subgraph đó. **Việc phân bổ phải được kết thúc với bằng chứng lập chỉ mục (proof of indexing - POI) hợp lệ đáp ứng các tiêu chuẩn do điều lệ trọng tài đặt ra để đủ điều kiện nhận phần thưởng** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. 
They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -Nhiều công cụ đã được cộng đồng tạo ra để tính toán phần thưởng; bạn sẽ tìm thấy một bộ sưu tập của chúng được sắp xếp trong [Bộ sưu tập Hướng dẫn cộng đồng](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Bạn cũng có thể tìm thấy danh sách cập nhật mới nhất các công cụ trong các kênh #delegators và #indexers trên [server Discord](https://discord.gg/vtvv7FP). +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). -### Bằng chứng lập chỉ mục (proof of indexing - POI) là gì? +### What is a proof of indexing (POI)? -POI được sử dụng trong mạng để xác minh rằng một indexer đang lập chỉ mục các subgraph mà họ đã phân bổ. POI cho khối đầu tiên của epoch hiện tại phải được gửi khi kết thúc phân bổ cho phân bổ đó để đủ điều kiện nhận phần thưởng indexing. POI cho một khối là một thông báo cho tất cả các giao dịch lưu trữ thực thể để triển khai một subgraph cụ thể lên đến và bao gồm khối đó. +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. -### Khi nào Phần thưởng indexing được phân phối? +### When are indexing rewards distributed? -Việc phân bổ liên tục tích lũy phần thưởng khi chúng đang hoạt động. Phần thưởng được thu thập bởi các indexer và phân phối bất cứ khi nào việc phân bổ của họ bị đóng lại. Điều đó xảy ra theo cách thủ công, bất cứ khi nào indexer muốn buộc đóng chúng hoặc sau 28 epoch, delegator có thể đóng phân bổ cho indexer, nhưng điều này dẫn đến không có phần thưởng nào được tạo ra. 28 epoch là thời gian tồn tại của phân bổ tối đa (hiện tại, một epoch kéo dài trong ~ 24 giờ). +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). -### Có thể giám sát phần thưởng indexer đang chờ xử lý không? +### Can pending indexer rewards be monitored? -Hợp đồng RewardsManager có có một chức năng [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) chỉ đọc có thể được sử dụng để kiểm tra phần thưởng đang chờ để phân bổ cụ thể. 
+The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. -Nhiều trang tổng quan (dashboard) do cộng đồng tạo bao gồm các giá trị phần thưởng đang chờ xử lý và bạn có thể dễ dàng kiểm tra chúng theo cách thủ công bằng cách làm theo các bước sau: +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Truy vấn [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) để nhận ID cho tất cả phần phân bổ đang hoạt động: +1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -60,319 +60,135 @@ query indexerAllocations { } ``` -Sử dụng Etherscan để gọi `getRewards()`: - -- Điều hướng đến [giao diện Etherscan đến hợp đồng Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) - -* Để gọi `getRewards()`: - - Mở rộng **10. getRewards** thả xuống. - - Nhập **allocationID** trong đầu vào. - - Nhấn **Nút** Truy vấn. - -### Tranh chấp là gì và tôi có thể xem chúng ở đâu? - -Các truy vấn và phần phân bổ của Indexer đều có thể bị tranh chấp trên The Graph trong thời gian tranh chấp. Thời hạn tranh chấp khác nhau, tùy thuộc vào loại tranh chấp. Truy vấn / chứng thực có cửa sổ tranh chấp 7 epoch (kỷ nguyên), trong khi phần phân bổ có 56 epoch. Sau khi các giai đoạn này trôi qua, không thể mở các tranh chấp đối với phần phân bổ hoặc truy vấn. Khi một tranh chấp được mở ra, các Fisherman yêu cầu một khoản stake tối thiểu là 10.000 GRT, sẽ bị khóa cho đến khi tranh chấp được hoàn tất và giải pháp đã được đưa ra. Fisherman là bất kỳ người tham gia mạng nào mà đã mở ra tranh chấp. - -Tranh chấp có **ba** kết quả có thể xảy ra, phần tiền gửi của Fisherman cũng vậy. - -- Nếu tranh chấp bị từ chối, GRT do Fisherman gửi sẽ bị đốt, và Indexer tranh chấp sẽ không bị phạt cắt giảm (slashed). -- Nếu tranh chấp được giải quyết dưới dạng hòa, tiền gửi của Fisherman sẽ được trả lại, và Indexer bị tranh chấp sẽ không bị phạt cắt giảm (slashed). -- Nếu tranh chấp được chấp nhận, lượng GRT do Fisherman đã gửi sẽ được trả lại, Indexer bị tranh chấp sẽ bị cắt và Fisherman sẽ kiếm được 50% GRT đã bị phạt cắt giảm (slashed). - -Tranh chấp có thể được xem trong giao diện người dùng trong trang hồ sơ của Indexer trong mục `Tranh chấp`. - -### Các khoản hoàn phí truy vấn là gì và chúng được phân phối khi nào? - -Phí truy vấn được cổng thu thập bất cứ khi nào một phần phân bổ được đóng và được tích lũy trong pool hoàn phí truy vấn của subgraph. Pool hoàn phí được thiết kế để khuyến khích Indexer phân bổ stake theo tỷ lệ thô với số phí truy vấn mà họ kiếm được cho mạng. Phần phí truy vấn trong pool được phân bổ cho một indexer cụ thể được tính bằng cách sử dụng Hàm Sản xuất Cobbs-Douglas; số tiền được phân phối cho mỗi indexer là một chức năng của phần đóng góp của họ cho pool và việc phân bổ stake của họ trên subgraph. - -Khi một phần phân bổ đã được đóng và thời gian tranh chấp đã qua, indexer sẽ có thể nhận các khoản hoàn phí. Khi yêu cầu, các khoản hoàn phí truy vấn được phân phối cho indexer và delegator của họ dựa trên mức cắt giảm phí truy vấn và tỷ lệ pool ủy quyền (delegation). 
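To make that rebate split concrete, here is a small, hypothetical calculation; the figures are invented purely for illustration and only assume that the query fee cut is expressed in parts per million, as in the delegation parameters described below, and that `bc` is installed:

```sh
# Hypothetical example: a 1000 GRT rebate pool and a queryFeeCut of 950000 ppm (95%)
REBATE_GRT=1000
QUERY_FEE_CUT_PPM=950000

INDEXER_SHARE=$(echo "$REBATE_GRT * $QUERY_FEE_CUT_PPM / 1000000" | bc -l)
DELEGATOR_SHARE=$(echo "$REBATE_GRT - $INDEXER_SHARE" | bc -l)

echo "Indexer share:   $INDEXER_SHARE GRT"
echo "Delegator share: $DELEGATOR_SHARE GRT"
```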
- -### Cắt giảm phí truy vấn và cắt giảm phần thưởng indexing là gì? - -Giá trị `queryFeeCut` và `indexingRewardCut` là các tham số delegation mà Indexer có thể đặt cùng với cooldownBlocks để kiểm soát việc phân phối GRT giữa indexer và delegator của họ. Xem các bước cuối cùng trong [Staking trong Giao thức](/indexing#stake-in-the-protocol) để được hướng dẫn về cách thiết lập các tham số delegation. - -- **queryFeeCut** - % hoàn phí truy vấn được tích lũy trên một subgraph sẽ được phân phối cho indexer. Nếu thông số này được đặt là 95%, indexer sẽ nhận được 95% của pool hoàn phí truy vấn khi một phần phân bổ được yêu cầu với 5% còn lại sẽ được chuyển cho delegator. - -- **indexingRewardCut** - % phần thưởng indexing được tích lũy trên một subgraph sẽ được phân phối cho indexer. Nếu thông số này được đặt là 95%, indexer sẽ nhận được 95% của pool phần thưởng indexing khi một phần phân bổ được đóng và các delegator sẽ chia 5% còn lại. - -### Làm thế nào để indexer biết những subgraph nào cần index? - -Indexer có thể tự phân biệt bản thân bằng cách áp dụng các kỹ thuật nâng cao để đưa ra quyết định index subgraph nhưng để đưa ra ý tưởng chung, chúng ta sẽ thảo luận một số số liệu chính được sử dụng để đánh giá các subgraph trong mạng: - -- **Tín hiệu curation** - Tỷ lệ tín hiệu curation mạng được áp dụng cho một subgraph cụ thể là một chỉ báo tốt về mức độ quan tâm đến subgraph đó, đặc biệt là trong giai đoạn khởi động khi khối lượng truy vấn đang tăng lên. - -- **Phí truy vấn đã thu** - Dữ liệu lịch sử về khối lượng phí truy vấn được thu thập cho một subgraph cụ thể là một chỉ báo tốt về nhu cầu trong tương lai. - -- **Số tiền được stake** - Việc theo dõi hành vi của những indexer khác hoặc xem xét tỷ lệ tổng stake được phân bổ cho subgraph cụ thể có thể cho phép indexer giám sát phía nguồn cung cho các truy vấn subgraph để xác định các subgraph mà mạng đang thể hiện sự tin cậy hoặc các subgraph có thể cho thấy nhu cầu nguồn cung nhiều hơn. - -- **Subgraph không có phần thưởng indexing** - Một số subgraph không tạo ra phần thưởng indexing chủ yếu vì chúng đang sử dụng các tính năng không được hỗ trợ như IPFS hoặc vì chúng đang truy vấn một mạng khác bên ngoài mainnet. Bạn sẽ thấy một thông báo trên một subgraph nếu nó không tạo ra phần thưởng indexing. - -### Có các yêu cầu gì về phần cứng (hardware)? - -
    -
  • - Nhỏ - Đủ để bắt đầu index một số subgraph, có thể sẽ cần được mở rộng. -
  • -
  • - Tiêu chuẩn - Thiết lập mặc định, đây là những gì được sử dụng trong bản kê khai (manifest) triển khai mẫu - k8s/terraform. -
  • -
  • - Trung bình - Công cụ indexing production hỗ trợ 100 đồ subgraph và 200-500 yêu cầu mỗi giây. -
  • -
  • - Lớn - Được chuẩn bị để index tất cả các subgraph hiện đang được sử dụng và phục vụ các yêu cầu cho lưu lượng - truy cập liên quan. -
  • -
- -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Thiết lập -
Postgres
-
(CPUs)
-
-
Postgres
-
(bộ nhớ tính bằng GB)
-
-
Postgres
-
(đĩa tính bằng TBs)
-
-
VMs
-
(CPUs)
-
-
VMs
-
(bộ nhớ tính bằng GB)
-
Nhỏ481416
Tiêu chuẩn83011248
Trung bình166423264
Lớn724683,548184
-
- -### Một số biện pháp phòng ngừa bảo mật cơ bản mà indexer nên thực hiện là gì? - -- **Ví Operator** - Thiết lập ví của operator là một biện pháp phòng ngừa quan trọng vì nó cho phép indexer duy trì sự tách biệt giữa các khóa kiểm soát stake của họ và những khóa kiểm soát hoạt động hàng ngày. Xem [Stake trong Giao thức](/indexing#stake-in-the-protocol) để được hướng dẫn. - -- **Tường lửa** - Chỉ dịch vụ indexer cần được hiển thị công khai và cần đặc biệt chú ý đến việc khóa các cổng quản trị và quyền truy cập cơ sở dữ liệu: điểm cuối The Graph Node JSON-RPC (cổng mặc định: 8030), điểm cuối API quản lý indexer (cổng mặc định: 18000), và điểm cuối cơ sở dữ liệu Postgres (cổng mặc định: 5432) không được để lộ. - -## Cơ sở hạ tầng - -Tại trung tâm của cơ sở hạ tầng của indexer là Graph Node theo dõi Ethereum, trích xuất và tải dữ liệu theo định nghĩa subgraph và phục vụ nó như một [GraphQL API](/about/introduction#how-the-graph-works). Graph Node cần được kết nối với điểm cuối node Ethereum EVM và node IPFS để tìm nguồn cung cấp dữ liệu; một cơ sở dữ liệu PostgreSQL cho kho lưu trữ của nó; và các thành phần indexer tạo điều kiện cho các tương tác của nó với mạng. - -- **Cơ sở dữ liệu PostgreSQLPostgreSQL** - Kho lưu trữ chính cho Graph Node, đây là nơi lưu trữ dữ liệu subgraph. Dịch vụ indexer và đại lý cũng sử dụng cơ sở dữ liệu để lưu trữ dữ liệu kênh trạng thái (state channel), mô hình chi phí và quy tắc indexing. - -- **Điểm cuối Ethereum** - Một điểm cuối cho thấy API Ethereum JSON-RPC. Điều này có thể ở dạng một ứng dụng khách Ethereum duy nhất hoặc nó có thể là một thiết lập phức tạp hơn để tải số dư trên nhiều máy khách. Điều quan trọng cần lưu ý là các subgraph nhất định sẽ yêu cầu các khả năng cụ thể của ứng dụng khách Ethereum như chế độ lưu trữ và API truy tìm. - -- **IPFS node (phiên bản nhỏ hơn 5)** - Siêu dữ liệu triển khai subgraph được lưu trữ trên mạng IPFS. Node The Graph chủ yếu truy cập vào node IPFS trong quá trình triển khai subgraph để tìm nạp tệp kê khai (manifest) subgraph và tất cả các tệp được liên kết. Indexers mạng lưới không cần lưu trữ node IPFS của riêng họ, một node IPFS cho mạng lưới được lưu trữ tại https://ipfs.network.thegraph.com. - -- **Dịch vụ Indexer** - Xử lý tất cả các giao tiếp bên ngoài được yêu cầu với mạng. Chia sẻ các mô hình chi phí và trạng thái indexing, chuyển các yêu cầu truy vấn từ các cổng đến Node The Graph và quản lý các khoản thanh toán truy vấn qua các kênh trạng thái với cổng. - -- **Đại lý Indexer ** - Tạo điều kiện thuận lợi cho các tương tác của Indexer trên blockchain bao gồm những việc như đăng ký trên mạng lưới, quản lý triển khai subgraph đối với Node The Graph của nó và quản lý phân bổ. Máy chủ số liệu Prometheus - Các thành phần Node The Graph và Indexer ghi các số liệu của chúng vào máy chủ số liệu. - -Lưu ý: Để hỗ trợ mở rộng quy mô nhanh, bạn nên tách các mối quan tâm về truy vấn và indexing giữa các nhóm node khác nhau: node truy vấn và node index. - -### Tổng quan về các cổng - -> **Quan trọng**: Hãy cẩn thận về việc để lộ các cổng 1 cách công khai - **cổng quản lý** nên được giữ kín. Điều này bao gồm JSON-RPC Node The Graph và các điểm cuối quản lý indexer được trình bày chi tiết bên dưới. +Use Etherscan to call `getRewards()`: + +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) + +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. 
+  - Click the **Query** button.
+
+### What are disputes and where can I view them?
+
+Indexers' queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies depending on the type of dispute: queries/attestations have a 7-epoch dispute window, whereas allocations have 56 epochs. After these periods pass, disputes can no longer be opened against allocations or queries. When a dispute is opened, the Fisherman must deposit a minimum of 10,000 GRT, which remains locked until the dispute is finalized and a resolution has been reached. A Fisherman is any network participant that opens a dispute.
+
+Disputes have **three** possible outcomes, and so does the Fisherman's deposit.
+
+- If the dispute is rejected, the GRT deposited by the Fisherman will be burned, and the disputed Indexer will not be slashed.
+- If the dispute is settled as a draw, the Fisherman's deposit will be returned, and the disputed Indexer will not be slashed.
+- If the dispute is accepted, the GRT deposited by the Fisherman will be returned, the disputed Indexer will be slashed, and the Fisherman will earn 50% of the slashed GRT.
+
+Disputes can be viewed in the UI on an Indexer's profile page under the `Disputes` tab.
+
+### What are query fee rebates and when are they distributed?
+
+Query fees are collected by the gateway whenever an allocation is closed and are accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that is allocated to a particular indexer is calculated using the Cobb-Douglas Production Function; the amount distributed to each indexer is a function of their contribution to the pool and their allocation of stake on the subgraph.
+
+Once an allocation has been closed and the dispute period has passed, the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions.
+
+### What is query fee cut and indexing reward cut?
+
+The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set, along with `cooldownBlocks`, to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters.
+
+- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed, with the other 5% going to the delegators.
+
+- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed, and the delegators will split the other 5%.
+
+### How do indexers know which subgraphs to index?
+
+Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions, but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+
+- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query volume is ramping up.
+
+- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+
+- **Amount staked** - Monitoring the behavior of other indexers, or looking at proportions of total stake allocated towards specific subgraphs, can allow an indexer to monitor the supply side for subgraph queries and identify subgraphs that the network is showing confidence in or subgraphs that may need more supply.
+
+- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards, mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+
+### What are the hardware requirements?
+
+- **Small** - Enough to get started indexing several subgraphs; it will likely need to be expanded.
+- **Standard** - Default setup; this is what is used in the example k8s/terraform deployment manifests.
+- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+
+| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | + +### What are some basic security precautions an indexer should take? + +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. + +- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. + +## Infrastructure + +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. + +- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. + +- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. + +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. + +- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. + +- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. + +Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. + +### Ports overview + +> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. #### Graph Node -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CổngMục đíchTuyếnĐối số CLI - Biến
Môi trường -
8000 - GraphQL HTTP server
- (dành cho các truy vấn subgraph) -
- /subgraphs/id/... -
-
- /subgraphs/name/.../... -
--http-port-
8001 - GraphQL WS
(Dành cho đăng ký subgraph) -
- /subgraphs/id/... -
-
- /subgraphs/name/.../... -
--ws-port-
8020 - JSON-RPC
- (để quản lý triển khai) -
/--admin-port-
8030API trạng thái indexing subgraph/graphql--index-node-port-
8040Số liệu Prometheus/metrics-metrics-port-
-
- -#### Dịch vụ Indexer - -
- - - - - - - - - - - - - - - - - - - - - - - - - - -
CổngMục đíchTuyếnĐối số CLIBiến Môi trường
7600 - GraphQL HTTP server
(Dành cho các truy vấn subgraph có trả phí) -
- /subgraphs/id/... -
- /status -
- /channel-messages-inbox -
--portINDEXER_SERVICE_PORT
7300Số liệu Prometheus/metrics--metrics-port-
-
- -#### Đại lý Indexer - -
- - - - - - - - - - - - - - - - - - - -
CổngMục đíchTuyếnĐối số CLIBiến Môi trường
8000API quản lý Indexer/--indexer-management-portINDEXER_AGENT_INDEXER_MANAGEMENT_PORT
-
- -### Thiết lập cơ sở hạ tầng máy chủ bằng Terraform trên Google Cloud - -#### Cài đặt điều kiện tiên quyết +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | + +#### Indexer Service + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | + +#### Indexer Agent + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | +| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | + +### Setup server infrastructure using Terraform on Google Cloud + +#### Install prerequisites - Google Cloud SDK -- Công cụ dòng lệnh Kubectl +- Kubectl command line tool - Terraform -#### Tạo một dự án Google Cloud +#### Create a Google Cloud Project -- Sao chép hoặc điều hướng đến kho lưu trữ (repository) của indexer. +- Clone or navigate to the indexer repository. -- Điều hướng đến thư mục ./terraform, đây là nơi tất cả các lệnh sẽ được thực thi. +- Navigate to the ./terraform directory, this is where all commands should be executed. ```sh -cd địa hình +cd terraform ``` -- Xác thực với Google Cloud và tạo một dự án mới. +- Authenticate with Google Cloud and create a new project. ```sh gcloud auth login @@ -380,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Sử dụng \[billing page\](billing page) của Google Cloud Consolde để cho phép thanh toán cho dự án mới. +- Use the Google Cloud Console's billing page to enable billing for the new project. -- Tạo một cấu hình Google Cloud. +- Create a Google Cloud configuration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -392,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Bật các API Google Cloud được yêu cầu. +- Enable required Google Cloud APIs. ```sh gcloud services enable compute.googleapis.com @@ -401,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Tạo một tài khoản dịch vụ. +- Create a service account. ```sh svc_name= @@ -419,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Bật tính năng ngang hàng (peering) giữa cơ sở dữ liệu và cụm Kubernetes sẽ được tạo trong bước tiếp theo. +- Enable peering between database and Kubernetes cluster that will be created in the next step. ```sh gcloud compute addresses create google-managed-services-default \ @@ -433,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Tạo tệp cấu hình terraform tối thiểu (cập nhật nếu cần). +- Create minimal terraform configuration file (update as needed). ```sh indexer= @@ -444,24 +260,24 @@ database_password = "" EOF ``` -#### Sử dụng Terraform để tạo cơ sở hạ tầng +#### Use Terraform to create infrastructure -Trước khi chạy bất kỳ lệnh nào, hãy đọc qua [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) và tạo một tệp `terraform.tfvars` trong thư mục này (hoặc sửa đổi thư mục chúng ta đã tạo ở bước vừa rồi). Đối với mỗi biến mà bạn muốn ghi đè mặc định hoặc nơi bạn cần đặt giá trị, hãy nhập cài đặt vào `terraform.tfvars`. +Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. 
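For illustration only, a filled-in `terraform.tfvars` might look like the sketch below. The `project`, `indexer`, and `database_password` keys are the same ones used in the minimal file created in the previous step; the values shown are placeholders, and any additional override must use a variable name that is actually declared in [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf).

```sh
# Illustrative sketch only — all values below are placeholders.
# Run from the ./terraform directory. List the variables that can be
# overridden before adding keys beyond the three used earlier in this guide.
grep 'variable ' variables.tf

cat > terraform.tfvars <<EOF
project           = "my-gcp-project-id"
indexer           = "my-indexer"
database_password = "change-me-to-a-strong-password"
EOF
```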
-- Chạy các lệnh sau để tạo cơ sở hạ tầng. +- Run the following commands to create the infrastructure. ```sh -# Cài đặt các Plugins được yêu cầu +# Install required plugins terraform init -# Xem kế hoạch cho các tài nguyên sẽ được tạo +# View plan for resources to be created terraform plan -# Tạo tài nguyên (dự kiến mất đến 30 phút) +# Create the resources (expect it to take up to 30 minutes) terraform apply ``` -Tải xuống thông tin đăng nhập cho cụm mới vào `~/.kube/config` và đặt nó làm ngữ cảnh mặc định của bạn. +Download credentials for the new cluster into `~/.kube/config` and set it as your default context. ```sh gcloud container clusters get-credentials $indexer @@ -469,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Tạo các thành phần Kubernetes cho indexer +#### Creating the Kubernetes components for the indexer -- Sao chép thư mục `k8s/overlays` đến một thư mục mới `$dir,` và điều chỉnh `bases` vào trong `$dir/kustomization.yaml` để nó chỉ đến thư mục `k8s/base`. +- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. -- Đọc qua tất cả các tệp trong `$dir` và điều chỉnh bất kỳ giá trị nào như được chỉ ra trong nhận xét. +- Read through all the files in `$dir` and adjust any values as indicated in the comments. -Triển khai tất cả các tài nguyên với `kubectl apply -k $dir`. +Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) là một triển khai Rust mã nguồn mở mà sự kiện tạo nguồn cho blockchain Ethereum để cập nhật một cách xác định kho dữ liệu có thể được truy vấn thông qua điểm cuối GraphQL. Các nhà phát triển sử dụng các subgraph để xác định subgraph của họ và một tập hợp các ánh xạ để chuyển đổi dữ liệu có nguồn gốc từ blockchain và Graph Node xử lý việc đồng bộ hóa toàn bộ chain, giám sát các khối mới và phân phát nó qua một điểm cuối GraphQL. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. -#### Bắt đầu từ nguồn +#### Getting started from source -#### Cài đặt điều kiện tiên quyết +#### Install prerequisites - **Rust** @@ -491,15 +307,15 @@ Triển khai tất cả các tài nguyên với `kubectl apply -k $dir`. - **IPFS** -- **Yêu cầu bổ sung cho người dùng Ubuntu** - Để chạy Graph Node trên Ubuntu, có thể cần một số gói bổ sung. +- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Cài đặt +#### Setup -1. Khởi động máy chủ cơ sở dữ liệu PostgreSQL +1. Start a PostgreSQL database server ```sh initdb -D .postgres @@ -507,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Nhân bản [Graph Node](https://github.com/graphprotocol/graph-node) repo và xây dựng nguồn bằng cách chạy `cargo build` +2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` -3. 
Bây giờ tất cả các phụ thuộc đã được thiết lập, hãy khởi động Graph Node: +3. Now that all the dependencies are setup, start the Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -518,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Bắt đầu sử dụng Docker +#### Getting started using Docker -#### Điều kiện tiên quyết +#### Prerequisites -- **Ethereum node** - Theo mặc định, thiết lập soạn thư docker sẽ sử dụng mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) để kết nối với node Ethereum trên máy chủ của bạn. Bạn có thể thay thế tên và url mạng này bằng cách cập nhật `docker-compose.yaml`. +- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. -#### Cài đặt +#### Setup -1. Nhân bản Graph Node và điều hướng đến thư mục Docker: +1. Clone Graph Node and navigate to the Docker directory: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. Chỉ dành cho người dùng linux - Sử dụng địa chỉ IP máy chủ thay vì `host.docker.internal` trong `docker-compose.yaml` bằng cách sử dụng tập lệnh bao gồm: +2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: ```sh ./setup.sh ``` -3. Bắt đầu một Graph Node cục bộ sẽ kết nối với điểm cuối Ethereum của bạn: +3. Start a local Graph Node that will connect to your Ethereum endpoint: ```sh docker-compose up ``` -### Các thành phần của Indexer +### Indexer components -Để tham gia thành công vào mạng này, đòi hỏi sự giám sát và tương tác gần như liên tục, vì vậy chúng tôi đã xây dựng một bộ ứng dụng Typescript để tạo điều kiện cho Indexer tham gia mạng. Có ba thành phần của trình indexer: +To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: -- **Đại ly Indexer** - Đại lý giám sát mạng và cơ sở hạ tầng của chính Indexer và quản lý việc triển khai subgraph nào được lập chỉ mục và phân bổ trên chain và số lượng được phân bổ cho mỗi. +- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. -- **Dịch vụ Indexer** - Thành phần duy nhất cần được hiển thị bên ngoài, dịch vụ chuyển các truy vấn subgraph đến graph node, quản lý các kênh trạng thái cho các khoản thanh toán truy vấn, chia sẻ thông tin ra quyết định quan trọng cho máy khách như các cổng. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- **Indexer CLI** - Giao diện dòng lệnh để quản lý đại lý indexer. Nó cho phép indexer quản lý các mô hình chi phí và các quy tắc lập chỉ mục. +- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. -#### Bắt đầu +#### Getting started -Đại lý indexer và dịch vụ indexer nên được đặt cùng vị trí với cơ sở hạ tầng Graph Node của bạn. 
Có nhiều cách để thiết lập môi trường thực thi ảo cho bạn các thành phần của indexer; ở đây chúng tôi sẽ giải thích cách chạy chúng trên baremetal bằng cách sử dụng gói hoặc nguồn NPM hoặc thông qua kubernetes và docker trên Google Cloud Kubernetes Engine. Nếu các ví dụ thiết lập này không được dịch tốt sang cơ sở hạ tầng của bạn, có thể sẽ có một hướng dẫn cộng đồng để tham khảo, hãy tìm hiểu thêm tại [Discord](https://thegraph.com/discord)! Hãy nhớ [stake trong giao thứcl](/indexing#stake-in-the-protocol) trước khi bắt đầu các thành phần indexer của bạn! +The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! -#### Từ các gói NPM +#### From NPM packages ```sh npm install -g @graphprotocol/indexer-service @@ -582,17 +398,17 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### Từ nguồn +#### From source ```sh -# Từ Repo root directory +# From Repo root directory yarn -# Dịch vụ Indexer +# Indexer Service cd packages/indexer-service ./bin/graph-indexer-service start ... -# Đại lý Indexer +# Indexer agent cd packages/indexer-agent ./bin/graph-indexer-service start ... @@ -602,48 +418,48 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Sử dụng docker +#### Using docker -- Kéo hình ảnh từ sổ đăng ký +- Pull images from the registry ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Hoặc xây dựng hình ảnh cục bộ từ nguồn +Or build images locally from source ```sh -# Dịch vụ Indexer +# Indexer service docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Đại lý Indexer +# Indexer agent docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Chạy các thành phần +- Run the components ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**LƯU Ý**: Sau khi khởi động vùng chứa, dịch vụ indexer sẽ có thể truy cập được tại [http://localhost:7600](http://localhost:7600) và đại lý indexer sẽ hiển thị API quản lý indexer tại [http://localhost:18000/](http://localhost:18000/). +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). 
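As a quick sanity check, the sketch below (which assumes the default port mappings from the `docker run` commands above) confirms that both containers respond before you go further. The `/status` route is the one listed for the indexer service in the ports table earlier on this page, and the `graph indexer` commands are the same ones introduced in the NPM package instructions.

```sh
# Indexer service: POST a minimal query to the /status route, which proxies
# the indexing status API (assumes port 7600 is published as shown above).
curl -s http://localhost:7600/status \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph synced health } }"}'

# Indexer agent: point the Indexer CLI at the management API and check that
# it responds (assumes port 18000 is published as shown above).
graph indexer connect http://localhost:18000/
graph indexer status
```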
-#### Sử dụng K8s and Terraform +#### Using K8s and Terraform -Xem phần [Thiết lập Cơ sở Hạ tầng Máy chủ bằng Terraform trên Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section -#### Sử dụng +#### Usage -> **LƯU Ý**: Tất cả các biến cấu hình thời gian chạy có thể được áp dụng dưới dạng tham số cho lệnh khi khởi động hoặc sử dụng các biến môi trường của định dạng `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### Đại lý Indexer +#### Indexer agent ```sh graph-indexer-agent start \ @@ -671,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Dịch vụ Indexer +#### Indexer service ```sh SERVER_HOST=localhost \ @@ -699,42 +515,42 @@ graph-indexer-service start \ #### Indexer CLI -Indexer CLI là một plugin dành cho [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) có thể truy cập trong terminal tại `graph indexer`. +The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Quản lý Indexer bằng cách sử dụng indexer CLI +#### Indexer management using indexer CLI -Đại lý indexer cần đầu vào từ một indexer để tự động tương tác với mạng thay mặt cho indexer. Cơ chế để xác định hành vi của đại lý indexer là **các quy tắc indexing**. Sử dụng **các quy tắc indexing** một indexer có thể áp dụng chiến lược cụ thể của họ để chọn các subgraph để lập chỉ mục và phục vụ các truy vấn. Các quy tắc được quản lý thông qua API GraphQL do đại lý phân phối và được gọi là API Quản lý Indexer. Công cụ được đề xuất để tương tác với **API Quản lý Indexer** là **Indexer CLI**, một extension cho **Graph CLI**. +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. -#### Sử dụng +#### Usage -**Indexer CLI** kết nối với đại lý indexer, thường thông qua chuyển tiếp cổng (port-forwarding), vì vậy CLI không cần phải chạy trên cùng một máy chủ hoặc cụm. Để giúp bạn bắt đầu và cung cấp một số ngữ cảnh, CLI sẽ được mô tả ngắn gọn ở đây. +The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. -- `graph indexer connect ` - Kết nối với API quản lý indexer. Thông thường, kết nối với máy chủ được mở thông qua chuyển tiếp cổng, vì vậy CLI có thể dễ dàng vận hành từ xa. (Ví dụ: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Connect to the indexer management API. 
Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Lấy một hoặc nhiều quy tắc indexing bằng cách sử dụng `all` như là `` để lấy tất cả các quy tắc, hoặc `global` để lấy các giá trị mặc định chung. Một đối số bổ sung`--merged` có thể được sử dụng để chỉ định rằng các quy tắc triển khai cụ thể được hợp nhất với quy tắc chung. Đây là cách chúng được áp dụng trong đại lý indexer. +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. -- `graph indexer rules set [options] ...` - Đặt một hoặc nhiều quy tắc indexing. +- `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Bắt đầu indexing triển khai subgraph nếu có và đặt `decisionBasis` thành `always`, để đại lý indexer sẽ luôn chọn lập chỉ mục nó. Nếu quy tắc chung được đặt thành luôn thì tất cả các subgraph có sẵn trên mạng sẽ được lập chỉ mục. +- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - Ngừng indexing triển khai và đặt `decisionBasis` không bao giờ, vì vậy nó sẽ bỏ qua triển khai này khi quyết định triển khai để lập chỉ mục. +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. -- `graph indexer rules maybe [options] ` — Đặt `thedecisionBasis` cho một triển khai thành `rules`, để đại lý indexer sẽ sử dụng các quy tắc indexing để quyết định có index việc triển khai này hay không. +- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. -Tất cả các lệnh hiển thị quy tắc trong đầu ra có thể chọn giữa các định dạng đầu ra được hỗ trợ (`table`, `yaml`, and `json`) bằng việc sử dụng đối số `-output`. +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. -#### Các quy tắc indexing +#### Indexing rules -Các quy tắc Indexing có thể được áp dụng làm mặc định chung hoặc cho các triển khai subgraph cụ thể bằng cách sử dụng ID của chúng. Các trường `deployment` và `decisionBasis` là bắt buộc, trong khi tất cả các trường khác là tùy chọn. Khi quy tắc lập chỉ mục có `rules` như là `decisionBasis`, thì đại lý indexer sẽ so sánh các giá trị ngưỡng không null trên quy tắc đó với các giá trị được tìm nạp từ mạng để triển khai tương ứng. Nếu triển khai subgraph có các giá trị trên (hoặc thấp hơn) bất kỳ ngưỡng nào thì nó sẽ được chọn để index. +Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -Ví dụ: nếu quy tắc chung có `minStake` của **5** (GRT), bất kỳ triển khai subgraph nào có hơn 5 (GRT) stake được phân bổ cho nó sẽ được index. Các quy tắc ngưỡng bao gồm `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, và `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. -Mô hình dữ liệu: +Data model: ```graphql type IndexingRule { @@ -757,117 +573,98 @@ IndexingDecisionBasis { } ``` -#### Các mô hình chi phí +#### Cost models -Mô hình chi phí cung cấp định giá động cho các truy vấn dựa trên thuộc tính thị trường và truy vấn. Dịch vụ Indexer chia sẻ mô hình chi phí với các cổng cho mỗi subgraph mà chúng dự định phản hồi các truy vấn. Đến lượt mình, các cổng sử dụng mô hình chi phí để đưa ra quyết định lựa chọn indexer cho mỗi truy vấn và để thương lượng thanh toán với những indexer đã chọn. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. #### Agora -Ngôn ngữ Agora cung cấp một định dạng linh hoạt để khai báo các mô hình chi phí cho các truy vấn. Mô hình giá Agora là một chuỗi các câu lệnh thực thi theo thứ tự cho mỗi truy vấn cấp cao nhất trong một truy vấn GraphQL. Đối với mỗi truy vấn cấp cao nhất, câu lệnh đầu tiên phù hợp với nó xác định giá cho truy vấn đó. +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. -Một câu lệnh bao gồm một vị từ (predicate), được sử dụng để đối sánh các truy vấn GraphQL và một biểu thức chi phí mà khi được đánh giá sẽ xuất ra chi phí ở dạng GRT thập phân. Các giá trị ở vị trí đối số được đặt tên của một truy vấn có thể được ghi lại trong vị từ và được sử dụng trong biểu thức. Các Globals có thể được đặt và thay thế cho các phần giữ chỗ trong một biểu thức. +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. 
-Mô hình chi phí mẫu: +Example cost model: ``` -# Câu lệnh này ghi lại giá trị bỏ qua (skip), -# sử dụng biểu thức boolean trong vị từ để khớp với các truy vấn cụ thể sử dụng `skip` -# và một biểu thức chi phí để tính toán chi phí dựa trên giá trị `skip` và SYSTEM_LOAD global +# This statement captures the skip value, +# uses a boolean expression in the predicate to match specific queries that use `skip` +# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# Mặc định này sẽ khớp với bất kỳ biểu thức GraphQL nào. -# Nó sử dụng một Global được thay thế vào biểu thức để tính toán chi phí +# This default will match any GraphQL expression. +# It uses a Global substituted into the expression to calculate cost default => 0.1 * $SYSTEM_LOAD; ``` -Ví dụ truy vấn chi phí bằng cách sử dụng mô hình trên: - -
- - - - - - - - - - - - - - - - - - - - - -
Truy vấnGiá
{ pairs(skip: 5000) { id } }0.5 GRT
{ tokens { symbol } }0.1 GRT
{ pairs(skip: 5000) { id { tokens } symbol } }0.6 GRT
-
- -#### Áp dụng mô hình chi phí - -Các mô hình chi phí được áp dụng thông qua Indexer CLI, chuyển chúng đến API Quản lý Indexer của đại lý indexer để lưu trữ trong cơ sở dữ liệu. Sau đó, Dịch vụ Indexer sẽ nhận chúng và cung cấp các mô hình chi phí tới các cổng bất cứ khi nào họ yêu cầu. +Example query costing using the above model: + +| Query | Price | +| ---------------------------------------------------------------------------- | ------- | +| { pairs(skip: 5000) { id } } | 0.5 GRT | +| { tokens { symbol } } | 0.1 GRT | +| { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | + +#### Applying the cost model + +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Tương tác với mạng +## Interacting with the network -### Stake trong giao thức +### Stake in the protocol -Các bước đầu tiên để tham gia vào mạng với tư cách là Indexer là phê duyệt giao thức, stake tiền và (tùy chọn) thiết lập địa chỉ operator cho các tương tác giao thức hàng ngày. _ **Lưu ý**: Đối với các mục đích của các hướng dẫn này, Remix sẽ được sử dụng để tương tác hợp đồng, nhưng hãy thoải mái sử dụng công cụ bạn chọn ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), và [MyCrypto](https://www.mycrypto.com/account) là một vài công cụ được biết đến khác)._ +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ -Khi một indexer đã stake GRT vào giao thức, [các thành phần indexer](/indexing#indexer-components) có thể được khởi động và bắt đầu tương tác của chúng với mạng. +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. -#### Phê duyệt các token +#### Approve tokens -1. Mở [Remix app](https://remix.ethereum.org/) trong một trình duyệt +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. Trong `File Explorer` tạo một tệp tên **GraphToken.abi** với [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. Với `GraphToken.abi` đã chọn và mở trong trình chỉnh sửa, chuyển sang Deploy (Triển khai) và `Run Transactions` trong giao diện Remix. +3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. -4. Trong môi trường (environment) chọn `Injected Web3` và trong `Account` chọn địa chỉ indexer của bạn. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. 
Đặt địa chỉ hợp đồng GraphToken - Dán địa chỉ hợp đồng GraphToken(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) kế bên `At Address` và nhấp vào nút `At address` để áp dụng. +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. -6. Gọi chức năng `approve(spender, amount)` để phê duyệt hợp đồng Staking. Điền phần `spender` bằng địa chỉ hợp đồng Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) và điền `amount` bằng số token để stake (tính bằng wei). +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). -#### Stake các token +#### Stake tokens -1. Mở [Remix app](https://remix.ethereum.org/) trong một trình duyệt +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. Trong `File Explorer` tạo một tệp tene **Staking.abi** với Staking ABI. +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. -3. Với `Staking.abi` đã chọn và mở trong trình chỉnh sửa, chuyển sang `Deploy` và `Run Transactions` trong giao diện Remix. +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. -4. Trong môi trường (environment) chọn `Injected Web3` và trong `Account` chọn địa chỉ indexer của bạn. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. Đặt địa chỉ hợp đồng Staking - Dán địa chỉ hợp đồng Staking (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) kế bên `At Address` và nhấp vào nút `At address` để áp dụng. +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. -6. Gọi lệnh `stake()` để stake GRT vào giao thức. +6. Call `stake()` to stake GRT in the protocol. -7. (Tùy chọn) Indexer có thể chấp thuận một địa chỉ khác làm operator cho cơ sở hạ tầng indexer của họ để tách các khóa kiểm soát tiền khỏi những khóa đang thực hiện các hành động hàng ngày như phân bổ trên các subgraph và phục vụ các truy vấn (có trả tiền). Để đặt operator, hãy gọi lệnh `setOperator()` với địa chỉ operator. +7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Tùy chọn) Để kiểm soát việc phân phối phần thưởng và thu hút delegator một cách chiến lược, indexer có thể cập nhật thông số ủy quyền của họ bằng cách cập nhật indexingRewardCut (phần triệu), queryFeeCut (phần triệu) và cooldownBlocks (số khối). Để làm như vậy, hãy gọi `setDelegationParameters()`. Ví dụ sau đặt queryFeeCut phân phối 95% hoàn phí truy vấn cho indexer và 5% cho delegator, đặt indexingRewardCutto phân phối 60% phần thưởng indexing cho indexer và 40% cho delegator và đặt `thecooldownBlocks` chu kỳ đến 500 khối. +8. 
(Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### Tuổi thọ của một phân bổ +### The life of an allocation -Sau khi được tạo bởi một indexer, một phân bổ lành mạnh sẽ trải qua bốn trạng thái. +After being created by an indexer a healthy allocation goes through four states. -- **Đang hoạt động** - Sau khi phân bổ được tạo trên chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) nó được xem là **đang hoạt động**. Một phần stake của chính indexer và/hoặc stake được ủy quyền được phân bổ cho việc triển khai subgraph, cho phép họ yêu cầu phần thưởng indexing và phục vụ các truy vấn cho việc triển khai subgraph đó. Đại lý indexer quản lý việc tạo phân bổ dựa trên các quy tắc của indexer. +- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. -- **Đã đóng** - Một indexer có thể tự do đóng phân bổ sau khi 1 epoch ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) hoặc đại lý indexer của họ sẽ tự động đóng phân bổ sau **maxAllocationEpochs** (hiện tại 28 ngày). Khi kết thúc phân bổ với bằng chứng hợp lệ về proof of indexing (POI), phần thưởng indexing của họ sẽ được phân phối cho indexer và những delegator của nó (xem "phần thưởng được phân phối như thế nào?" Bên dưới để tìm hiểu thêm). +- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). -- **Hoàn thiện** - Sau khi phân bổ đã bị đóng, sẽ có một khoảng thời gian tranh chấp mà sau đó phân bổ được xem xét là **hoàn thiện** và nó có sẵn các khoản hoàn lại phí truy vấn khả dụng để được yêu cầu (claim()). Đại lý indexer giám sát mạng để phát hiện các phân bổ **hoàn thiện** yêu cầu chúng nếu chúng vượt quá ngưỡng có thể định cấu hình (và tùy chọn), **—-allocation-claim-threshold**. +- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). 
The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. -- **Đã yêu cầu** - Trạng thái cuối cùng của một phân bổ; nó đã chạy quá trình của nó dưới dạng phân bổ đang hoạt động, tất cả các phần thưởng đủ điều kiện đã được phân phối và các khoản bồi hoàn phí truy vấn của nó đã được yêu cầu. +- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. From 05963d3656a080b34e7c373b9e6fcdafce7404c6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:31 -0500 Subject: [PATCH 090/241] New translations global.json (Spanish) --- pages/es/global.json | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/pages/es/global.json b/pages/es/global.json index a89f8e037622..d829483dff23 100644 --- a/pages/es/global.json +++ b/pages/es/global.json @@ -1,5 +1,17 @@ { - "aboutTheGraph": "Acerca de The Graph", + "language": "Language", + "aboutTheGraph": "About The Graph", "developer": "Desarrollador", - "supportedNetworks": "Redes compatibles" + "supportedNetworks": "Redes admitidas", + "collapse": "Collapse", + "expand": "Expand", + "previous": "Previous", + "next": "Next", + "editPage": "Edit page", + "pageSections": "Page Sections", + "linkToThisSection": "Link to this section", + "technicalLevelRequired": "Technical Level Required", + "notFoundTitle": "Oops! This page was lost in space...", + "notFoundSubtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "goHome": "Go Home" } From 108420b58119d3eafb0d6b5e31fdcfb2e6ac353b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:31 -0500 Subject: [PATCH 091/241] New translations global.json (Arabic) --- pages/ar/global.json | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/pages/ar/global.json b/pages/ar/global.json index d96555da2f85..d7e8be465fc7 100644 --- a/pages/ar/global.json +++ b/pages/ar/global.json @@ -1,5 +1,17 @@ { - "aboutTheGraph": "حول The Graph", - "developer": "مطور", - "supportedNetworks": "الشبكات المدعومة" + "language": "Language", + "aboutTheGraph": "About The Graph", + "developer": "المطور", + "supportedNetworks": "الشبكات المدعومة", + "collapse": "Collapse", + "expand": "Expand", + "previous": "Previous", + "next": "Next", + "editPage": "Edit page", + "pageSections": "Page Sections", + "linkToThisSection": "Link to this section", + "technicalLevelRequired": "Technical Level Required", + "notFoundTitle": "Oops! 
This page was lost in space...", + "notFoundSubtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "goHome": "Go Home" } From ceb2038e633b1f5531dbd54f36202153f01cf416 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:32 -0500 Subject: [PATCH 092/241] New translations global.json (Japanese) --- pages/ja/global.json | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/pages/ja/global.json b/pages/ja/global.json index 992768286958..ebb2edd830b6 100644 --- a/pages/ja/global.json +++ b/pages/ja/global.json @@ -1,5 +1,17 @@ { - "aboutTheGraph": "The Graphについて", - "developer": "デベロッパー", - "supportedNetworks": "サポートされているネットワーク" + "language": "Language", + "aboutTheGraph": "About The Graph", + "developer": "ディベロッパー", + "supportedNetworks": "Supported Networks", + "collapse": "Collapse", + "expand": "Expand", + "previous": "Previous", + "next": "Next", + "editPage": "Edit page", + "pageSections": "Page Sections", + "linkToThisSection": "Link to this section", + "technicalLevelRequired": "Technical Level Required", + "notFoundTitle": "Oops! This page was lost in space...", + "notFoundSubtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "goHome": "Go Home" } From a09d6b43b080d70ee463faa25da7c44cc8bd33e5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:33 -0500 Subject: [PATCH 093/241] New translations global.json (Chinese Simplified) --- pages/zh/global.json | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/pages/zh/global.json b/pages/zh/global.json index 8111158cc4e7..cf259d6a0432 100644 --- a/pages/zh/global.json +++ b/pages/zh/global.json @@ -1,5 +1,17 @@ { - "aboutTheGraph": "关于 The Graph", - "developer": "开发商", - "supportedNetworks": "支持的网络" + "language": "Language", + "aboutTheGraph": "About The Graph", + "developer": "开发者", + "supportedNetworks": "支持的网络", + "collapse": "Collapse", + "expand": "Expand", + "previous": "Previous", + "next": "Next", + "editPage": "Edit page", + "pageSections": "Page Sections", + "linkToThisSection": "Link to this section", + "technicalLevelRequired": "Technical Level Required", + "notFoundTitle": "Oops! This page was lost in space...", + "notFoundSubtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "goHome": "Go Home" } From c6c747b32e0f0ee0fba25309cb28610b437d426d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:34 -0500 Subject: [PATCH 094/241] New translations indexing.mdx (Arabic) --- pages/ar/indexing.mdx | 362 +++++++++++++++++++++--------------------- 1 file changed, 181 insertions(+), 181 deletions(-) diff --git a/pages/ar/indexing.mdx b/pages/ar/indexing.mdx index c28f9bed30b5..0b1896db2749 100644 --- a/pages/ar/indexing.mdx +++ b/pages/ar/indexing.mdx @@ -4,47 +4,47 @@ title: فهرسة (indexing) import { Difficulty } from '@/components' -المفهرسون ( Indexers) هم مشغلي العقد (node) في شبكة TheGraph ويقومون ب staking لتوكن (GRT) من أجل توفير خدمات الفهرسة ( indexing) والاستعلام. المفهرسون(Indexers) يحصلون على رسوم الاستعلام ومكافآت الفهرسة وذلك مقابل خدماتهم. وأيضا يكسبون من مجموعة الخصومات (Rebate Pool) والتي تتم مشاركتها مع جميع المساهمين في الشبكة بما يتناسب مع عملهم ، وفقا ل Cobbs-Douglas Rebate Function. 
+Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. -يخضع GRT المخزن في البروتوكول لفترة إذابة thawing period وقد يتم شطبه إذا كان المفهرسون ضارون ويقدمون بيانات غير صحيحة للتطبيقات أو إذا قاموا بالفهرسة بشكل غير صحيح. المفهرسون يتم تفويضهم من قبل المفوضين وذلك للمساهمه في الشبكة. +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. -يختار المفهرسون subgraphs للقيام بالفهرسة بناء على إشارة تنسيق subgraphs ، حيث أن المنسقون يقومون ب staking ل GRT وذلك للإشارة ل Subgraphs عالية الجودة. يمكن أيضا للعملاء (مثل التطبيقات) تعيين بارامترات حيث يقوم المفهرسون بمعالجة الاستعلامات ل Subgraphs وتسعير رسوم الاستعلام. +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. -## الأسئلة الشائعة +## FAQ -### ما هو الحد الأدنى لتكون مفهرسا على الشبكة؟ +### What is the minimum stake required to be an indexer on the network? -لتكون مفهرسا فإن الحد الأدنى ل Staking هو 100K GRT. +The minimum stake for an indexer is currently set to 100K GRT. -### ما هي مصادر الدخل للمفهرس؟ +### What are the revenue streams for an indexer? -** خصومات رسوم الاستعلام Query fee rebates ** - هي مدفوعات مقابل خدمة الاستعلامات على الشبكة. هذه الأجور تكون بواسطة قناة بين المفهرس والبوابة (gateway). كل طلب استعلام من بوابة يحتوي على دفع ،والرد عليه دليل على صحة نتيجة الاستعلام. +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -** مكافآت الفهرسة Indexing rewards** - يتم إنشاؤها من خلال تضخم سنوي للبروتوكول بنسبة 3٪ ، ويتم توزيع مكافآت الفهرسة على المفهرسين الذين يقومون بفهرسة ال subgraphs للشبكة. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. -### كيف توزع المكافآت؟ +### How are rewards distributed? -تأتي مكافآت الفهرسة من تضخم البروتوكول والذي تم تعيينه بنسبة 3٪ سنويا. يتم توزيعها عبر subgraphs بناءً على نسبة جميع إشارات التنسيق في كل منها ، ثم يتم توزيعها بالتناسب على المفهرسين بناءً على حصصهم المخصصة على هذا ال subgraph. \*\* يجب إغلاق المخصصة بإثبات صالح للفهرسة (POI) والذي يفي بالمعايير التي حددها ميثاق التحكيم حتى يكون مؤهلاً للحصول على المكافآت. +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -تم إنشاء العديد من الأدوات من قبل المجتمع لحساب المكافآت ؛ ستجد مجموعة منها منظمة في دليل المجتمع. يمكنك أيضا أن تجد قائمة محدثة من الأدوات في قناة #delegators و #indexers على Discord. +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). -### ما هو إثبات الفهرسة (POI)؟ +### What is a proof of indexing (POI)? -تُستخدم POIs في الشبكة وذلك للتحقق من أن المفهرس يقوم بفهرسة ال subgraphs والتي قد تم تخصيصها. POI للكتلة الأولى من الفترة الحالية تسلم عند إغلاق المخصصة لذلك التخصيص ليكون مؤهلاً لفهرسة المكافآت. كتلة ال POI هي عبارة عن ملخص لجميع معاملات المخزن لنشر subgraph محدد حتى تضمين تلك الكتلة. +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. -### متى يتم توزيع مكافآت الفهرسة؟ +### When are indexing rewards distributed? -المخصصات تقوم بتجميع المكافآت باستمرار أثناء فاعليتها. يتم جمع المكافآت من قبل المفهرسين وتوزيعها كلما تم إغلاق مخصصاتهم. يحدث هذا إما يدويا عندما يريد المفهرس إغلاقها بالقوة ، أو بعد 28 فترة يمكن للمفوض إغلاق التخصيص للمفهرس ، لكن هذا لا ينتج عنه أي مكافآت. 28 فترة هي أقصى مدة للتخصيص (حاليا، تستمر فترة واحدة لمدة 24 ساعة تقريبًا). +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). -### هل يمكن مراقبة مكافآت المفهرس المعلقة؟ +### Can pending indexer rewards be monitored? -تشمل العديد من لوحات المعلومات dashboards التي أنشأها المجتمع على قيم المكافآت المعلقة ويمكن التحقق منها بسهولة يدويا باتباع الخطوات التالية: +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. -استخدم Etherscan لاستدعاء `getRewards()`: +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. استعلم عن [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) للحصول على IDs لجميع المخصصات النشطة: +1. 
Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -60,109 +60,109 @@ query indexerAllocations { } ``` -استخدم Etherscan لاستدعاء `()getRewards`: +Use Etherscan to call `getRewards()`: -- انتقل إلى [ واجهة Etherscan لعقد المكافآت Rewards contract ](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* لاستدعاء ()getRewards: - - قم بتوسيع ال\*\* 10. قائمة getRewards المنسدلة. - - انقر على زر **Query استعلام**. - - الاعتراضات لديها **ثلاث** نتائج محتملة ، وكذلك إيداع ال Fishermen. +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. -### ما هي الاعتراضات disputes وأين يمكنني عرضها؟ +### What are disputes and where can I view them? -يمكن الاعتراض على استعلامات المفهرس وتخصيصاته على The Graph أثناء فترة الاعتراض dispute. تختلف فترة الاعتراض حسب نوع الاعتراض. تحتوي الاستعلامات / الشهادات Queries/attestations على نافذة اعتراض لـ 7 فترات ، في حين أن المخصصات لها 56 فترة. بعد مرور هذه الفترات ، لا يمكن فتح اعتراضات ضد أي من المخصصات أو الاستعلامات. عند فتح الاعتراض ، يجب على الصيادين Fishermen إيداع على الأقل 10000 GRT ، والتي سيتم حجزها حتى يتم الانتهاء من الاعتراض وتقديم حل. الصيادون Fisherman هم المشاركون في الشبكة الذين يفتحون الاعتراضات. +Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. -يمكنك عرض الاعتراضات من واجهة المستخدم في صفحة ملف تعريف المفهرس وذلك من علامة التبويب `Disputes`. +Disputes have **three** possible outcomes, so does the deposit of the Fishermen. -- إذا تم رفض الاعتراض، فسيتم حرق GRT المودعة من قبل ال Fishermen ، ولن يتم شطب المفهرس المعترض عليه. -- إذا تمت تسوية الاعتراض بالتعادل، فسيتم إرجاع وديعة ال Fishermen ، ولن يتم شطب المفهرس المعترض عليه. -- إذا تم قبول الاعتراض، فسيتم إرجاع GRT التي أودعها الFishermen ، وسيتم شطب المفهرس المعترض عليه وسيكسب Fishermen ال 50٪ من GRT المشطوبة. +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. +- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. -يمكن عرض الاعتراضات في واجهة المستخدم في بروفايل المفهرس ضمن علامة التبويب ` Disputes`. +Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. -### ما هي خصومات رسوم الاستعلام ومتى يتم توزيعها؟ +### What are query fee rebates and when are they distributed? -يتم تحصيل رسوم الاستعلام بواسطة البوابة gateway وذلك عندما يتم إغلاق الحصة وتجميعها في خصومات رسوم الاستعلام في ال subgraph. 
تم تصميم مجموعة الخصومات rebate pool لتشجيع المفهرسين على تخصيص حصة تقريبية لمقدار رسوم الاستعلام التي يكسبونها للشبكة. يتم حساب جزء رسوم الاستعلام في المجموعة التي تم تخصيصها لمفهرس معين وذلك باستخدام دالة Cobbs-Douglas Production ؛ المبلغ الموزع لكل مفهرس يعتمد على مساهماتهم في المجموعة pool وتخصيص حصتهم على ال subgraph. +Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. -بمجرد إغلاق التخصيص ومرور فترة الاعتراض، تكون الخصومات متاحة للمطالبة بها من قبل المفهرس. عند المطالبة ، يتم توزيع خصومات رسوم الاستعلام للمفهرس ومفوضيه بناء على اقتطاع رسوم الاستعلام query fee cut ونسب أسهم التفويض. +Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. -### ما المقصود بqueryFeeCut وindexingRewardCut؟ +### What is query fee cut and indexing reward cut? -قيم ال `queryFeeCut` و `indexingRewardCut` هي بارامترات التفويض التي قد يقوم المفهرس بتعيينها مع cooldownBlocks للتحكم في توزيع GRT بين المفهرس ومفوضيه. انظر لآخر الخطوات في [ ال staking في البروتوكول](/indexing#stake-in-the-protocol) للحصول على إرشادات حول تعيين بارامترات التفويض. +The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. -- **queryFeeCut** هي النسبة المئوية لخصومات رسوم الاستعلام المتراكمة على subgraph والتي سيتم توزيعها على المفهرس. إذا تم التعيين على 95٪ ، فسيحصل المفهرس على 95٪ من مجموعة خصم رسوم الاستعلام عند المطالبة بالمخصصة و 5٪ إلى المفوضين. +- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. -- **indexingRewardCut** هي النسبة المئوية لمكافآت الفهرسة المتراكمة على subgraph والتي سيتم توزيعها على المفهرس. إذا تم تعيين 95٪ ، فسيحصل المفهرس على 95٪ من مجموع مكافآت الفهرسة عند إغلاق المخصصة وسيقوم المفوضون بتقاسم الـ 5٪ الأخرى. +- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. -### كيف يعرف المفهرسون أي subgraphs عليهم فهرستها؟ +### How do indexers know which subgraphs to index? 
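The metrics discussed below can also be checked directly against the network subgraph before opening an allocation. The query below is a minimal sketch against the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) referenced earlier on this page; the deployment ID is a placeholder, and the field names (`signalledTokens`, `stakedTokens`, `queryFeesAmount`) are assumptions that should be verified against the current graph-network-mainnet schema.

```graphql
{
  # Placeholder deployment ID, replace with the deployment being evaluated
  subgraphDeployment(id: "0x0000000000000000000000000000000000000000000000000000000000000000") {
    # Curation signal currently on this deployment
    signalledTokens
    # Total indexer stake currently allocated towards it
    stakedTokens
    # Query fees it has generated so far
    queryFeesAmount
  }
}
```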
-من خلال تطبيق تقنيات متقدمة لاتخاذ قرارات فهرسة ال subgraph ، وسنناقش العديد من المقاييس الرئيسية المستخدمة لتقييم ال subgraphs في الشبكة: +Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: -- **إشارة التنسيق Curation signal** ـ تعد نسبة إشارة تنسيق الشبكة على subgraph معين مؤشرا جيدا على الاهتمام بهذا ال subgraph، خاصة أثناء المراحل الأولى عندما يزداد حجم الاستعلام. +- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. -- **مجموعة رسوم الاستعلام Query fees collected** ـ تعد البيانات التاريخية لحجم مجموعة رسوم الاستعلام ل subgraph معين مؤشرا جيدا للطلب المستقبلي. +- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. -- **Amount staked** ـ مراقبة سلوك المفهرسين أو النظر إلى نسب إجمالي الحصة المخصصة ل subgraphs معين تسمح للمفهرس بمراقبة جانب العرض لاستعلامات الsubgraph لتحديد ال subgraphs الموثوقة أو subgraphs التي قد تظهر الحاجة إلى مزيد من العرض. +- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. -- **ال Subgraphs التي بدون مكافآت فهرسة** ـ بعض الsubgraphs لا تنتج مكافآت الفهرسة بشكل أساسي لأنها تستخدم ميزات غير مدعومة مثل IPFS أو لأنها تستعلم عن شبكة أخرى خارج الشبكة الرئيسية mainnet. سترى رسالة على ال subgraph إذا لا تنتج مكافآت فهرسة. +- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. -### ما هي المتطلبات للهاردوير؟ +### What are the hardware requirements? -- **صغيرة**ـ يكفي لبدء فهرسة العديد من ال subgraphs، من المحتمل أن تحتاج إلى توسيع. -- ** قياسية ** - هو الإعداد الافتراضي ، ويتم استخدامه في مثال بيانات نشر k8s / terraform. -- **متوسطة** - مؤشر انتاج ​​يدعم 100 subgraphs و 200-500 طلب في الثانية. -- **كبيرة** - مُعدة لفهرسة جميع ال subgraphs المستخدمة حاليا وأيضا لخدمة طلبات حركة مرور البيانات ذات الصلة. +- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. +- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. -| Setup | (CPUs) | (memory in GB) | (disk in TBs) | (CPUs) | (memory in GB) | -| ----- | :----: | :------------: | :-----------: | :----: | :------------: | -| صغير | 4 | 8 | 1 | 4 | 16 | -| قياسي | 8 | 30 | 1 | 12 | 48 | -| متوسط | 16 | 64 | 2 | 32 | 64 | -| كبير | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs
(memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | -### ما هي بعض احتياطات الأمان الأساسية التي يجب على المفهرس اتخاذها؟ +### What are some basic security precautions an indexer should take? -- **محفظة المشغلOperator wallet**- يعد إعداد محفظة المشغل إجراء احترازيًا مهمًا لأنه يسمح للمفهرس بالحفاظ على الفصل بين مفاتيحه التي تتحكم في ال stake وتلك التي تتحكم في العمليات اليومية. انظر [الحصة Stake في البروتوكول](/indexing#stake-in-the-protocol) للحصول على تعليمات. +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. -- **الجدار الناريFirewall**- فقط خدمة المفهرس تحتاج إلى كشفها للعامة ويجب تأمين منافذ الإدارة والوصول إلى قاعدة البيانات: the Graph Node JSON-RPC endpoint (المنفذ الافتراضي: 8030) ، API endpoint لإدارة المفهرس (المنفذ الافتراضي: 18000) ، ويجب عدم كشف نقطة نهاية قاعدة بيانات Postgres (المنفذ الافتراضي: 5432). +- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. -## البنية الأساسية +## Infrastructure -في البنية الأساسية للمفهرس ، توجد فيها Graph Node والتي تراقب Ethereum وتستخرج وتحمل البيانات لكل تعريف subgraph وتقدمها باعتبارها [GraphQL API](/about/introduction#how-the-graph-works). يجب توصيل Graph Node ب EVM node endpoints و IPFS node للحصول على البيانات و قاعدة بيانات PostgreSQL ومكونات المفهرس indexer components التي تسهل تفاعلها مع الشبكة. +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. -- **قاعدة بيانات PostgreSQL**-هو المخزن الرئيسي لGraph Node ، وفيه يتم تخزين بيانات ال subgraph. خدمة المفهرس والوكيل تستخدم أيضًا قاعدة البيانات لتخزين بيانات قناة الحالة ونماذج التكلفة وقواعد الفهرسة. +- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. -- ** Ethereum endpoint ** - هي نقطة نهاية تعرض Ethereum JSON-RPC API. قد يأخذ ذلك نموذج عميل Ethereum واحدا أو قد يكون ذو إعداد أكثر تعقيدا والذي يقوم بتحميل أرصدة عبر عدة نماذج. من المهم أن تدرك أن بعض ال subgraphs تتطلب قدرات معينة لعميل Ethereum مثل الأرشفة وتتبع API. +- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. 
It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. -- **(الإصدار أقل من 5) IPFS node** بيانات ال Subgraph تخزن على شبكة IPFS. يمكن لGraph Node بشكل أساسي الوصول إلى IPFS node أثناء نشر الsubgraph لجلب الsubgraph manifest وجميع الملفات المرتبطة. لا يحتاج مفهرسو الشبكة إلى استضافة IPFS node الخاصة بهم ، حيث يتم استضافة IPFS node للشبكة على https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. -- **خدمة المفهرس Indexer service**- يتعامل مع جميع الاتصالات الخارجية المطلوبة مع الشبكة. ويشارك نماذج التكلفة وحالات الفهرسة ، ويمرر طلبات الاستعلام من البوابات gateways إلى Graph Node ، ويدير مدفوعات الاستعلام عبر قنوات الحالة مع البوابة. +- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent**- يسهل تفاعلات المفهرسين على السلسلة بما في ذلك التسجيل في الشبكة ، وإدارة عمليات نشر الsubgraph إلى Graph Node/s الخاصة بها ، وإدارة المخصصات. سيرفر مقاييس Prometheus - مكونات ال Graph Node والمفهرس تسجل قياساتها على سيرفر المقاييس. +- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. -ملاحظة: لدعم القياس السريع ، يستحسن فصل الاستعلام والفهرسة بين مجموعات مختلفة من العقد Nodes: عقد الاستعلام وعقد الفهرس. +Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. -### نظرة عامة على المنافذ Ports +### Ports overview -> **مهم** كن حذرًا بشأن كشف المنافذ للعامة - **منافذ الإدارة** يجب أن تبقى مغلقة. يتضمن ذلك Graph Node JSON-RPC ونقاط نهاية endpoints إدارة المفهرس التالية. +> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | --http-port | - |
(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | --ws-port | - |
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | --http-port | - |
(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | --ws-port | - |
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | -#### خدمة المفهرس +#### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
+- Run the following commands to create the infrastructure. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -انشر جميع المصادر باستخدام `kubectl application -k $dir`. +Download credentials for the new cluster into `~/.kube/config` and set it as your default context. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### إنشاء مكونات ال Kubernetes للمفهرس +#### Creating the Kubernetes components for the indexer -- انسخ الدليل `k8s / Overays` إلى دليل جديد `$dir,` واضبط إدخال `القواعد` في `$dir/ kustomization.yaml` بحيث يشير إلى الدليل `k8s / base`. +- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. -- اقرأ جميع الملفات الموجودة في `$dir` واضبط القيم كما هو موضح في التعليقات. +- Read through all the files in `$dir` and adjust any values as indicated in the comments. Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[ Graph Node ](https://github.com/graphprotocol/graph-node) هو تطبيق مفتوح المصدر Rust ومصدره Ethereum blockchain لتحديث البيانات والذي يمكن الاستعلام عنها عبر GraphQL endpoint. يستخدم المطورون ال subgraphs لتحديد مخططهم ، ويستخدمون مجموعة من الرسوم لتحويل البيانات التي يتم الحصول عليها من blockchain و the Graph Node والتي تقوم بمعالجة مزامنة السلسلة بأكملها ، ومراقبة الكتل الجديدة ، وتقديمها عبر GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. -#### ابدأ من المصدر +#### Getting started from source -#### متطلبات التثبيت +#### Install prerequisites - **Rust** @@ -307,7 +307,7 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **متطلبات إضافية لمستخدمي Ubuntu **- لتشغيل Graph Node على Ubuntu ، قد تكون هناك حاجة إلى بعض الحزم الإضافية. +- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config @@ -315,7 +315,7 @@ sudo apt-get install -y clang libpg-dev libssl-dev pkg-config #### Setup -1. شغل سيرفر قاعدة بيانات PostgreSQL +1. Start a PostgreSQL database server ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. استنسخ [ Graph Node ](https://github.com/graphprotocol/graph-node) وابني المصدر عن طريق تشغيل `cargo build` +2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` -3. ابدأ Graph Node: +3. Now that all the dependencies are setup, start the Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### الشروع في استخدام Docker +#### Getting started using Docker -#### المتطلبات الأساسية +#### Prerequisites -- **Ethereum node** - افتراضيا،إعداد ال docker سيستخدم mainnet [http://host.docker.internal:8545](http://host.docker.internal:8545) للاتصال بEthereum node على جهازك المضيف. 
يمكنك استبدال اسم الشبكة وعنوان url بتحديث `docker-compose.yaml`. +- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. #### Setup -1. انسخ Graph Node وانتقل إلى دليل Docker: +1. Clone Graph Node and navigate to the Docker directory: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. لمستخدمي نظام Linux فقط - استخدم عنوان IP للمضيف بدلاً من `host.docker.internal` في `docker-compose.yaml` باستخدام البرنامج النصي المضمن: +2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: ```sh ./setup.sh ``` -3. ابدأ Graph Node محلية والتي ستتصل ب Ethereum endpoint الخاصة بك: +3. Start a local Graph Node that will connect to your Ethereum endpoint: ```sh docker-compose up ``` -### مكونات المفهرس Indexer components +### Indexer components -المشاركة الناجحة في الشبكة تتطلب مراقبة وتفاعلا مستمرين تقريبا ، لذلك قمنا ببناء مجموعة من تطبيقات Typescript لتسهيل مشاركة شبكة المفهرسين. هناك ثلاثة مكونات للمفهرس: +To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: -- **Indexer agent** - يراقب الشبكة والبنية الأساسية الخاصة بالمفهرس ويدير عمليات نشر subgraph والتي تتم فهرستها وتوزيعها على السلسلة ومقدار ما يتم تخصيصه لكل منها. +- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. -- **Indexer service** - المكون الوحيد الذي يجب الكشف عنه للعامة، حيث تمر الخدمة على استعلامات subgraph إلى graph node ، وتدير قنوات الحالة state channels لمدفوعات الاستعلام ، وتشارك معلومات مهمة بشأن اتخاذ القرار للعملاء مثل البوابات gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- ** فهرس CLI ** - واجهة سطر الأوامر لإدارة وكيل المفهرس indexer agent. يسمح للمفهرسين بإدارة نماذج التكلفة وقواعد الفهرسة. +- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. -#### ابدأ +#### Getting started -يجب أن يتم وضع وكيل المفهرس indexer agent وخدمة المفهرس indexer service في نفس الموقع مع البنية الأساسية ل Graph Node الخاصة بك. هناك العديد من الطرق لإعداد بيئات التشغيل الافتراضية لمكونات المفهرس ؛ سنشرح هنا كيفية تشغيلها على baremetal باستخدام حزم NPM أو المصدر ، أو عبر kubernetes و docker على Google Cloud Kubernetes Engine. إذا لم تُترجم أمثلة الإعداد هذه بشكل جيد إلى بنيتك الأساسية ، فمن المحتمل أن يكون هناك دليل مجتمعي للرجوع إليه ، تفضل بزيارة [ Discord ](https://thegraph.com/discord)! تذكر أن [ تشارك في البروتوكول ](/indexing#stake-in-the-protocol) قبل البدء في تشغيل مكونات المفهرس! +The indexer agent and indexer service should be co-located with your Graph Node infrastructure. 
There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! -#### من حزم NPM +#### From NPM packages ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### من المصدر +#### From source ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### استخدام docker +#### Using docker -- اسحب الصور من السجل +- Pull images from the registry ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -**ملاحظة**: بعد بدء ال containers ، يجب أن تكون خدمة المفهرس متاحة على [http: // localhost: 7600 ](http://localhost:7600) ويجب على وكيل المفهرس عرض API إدارة المفهرس على [ http: // localhost: 18000 / ](http://localhost:18000/). +Or build images locally from source ```sh # Indexer service @@ -442,22 +442,22 @@ docker build \ -t indexer-agent:latest \ ``` -- قم بتشغيل المكونات +- Run the components ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -انظر قسم [ إعداد البنية الأساسية للسيرفر باستخدام Terraform على Google Cloud ](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). -#### استخدام K8s و Terraform +#### Using K8s and Terraform -The Indexer CLI هو مكون إضافي لـ [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) ويمكن الوصول إليه في النهاية الطرفية عند `graph indexer`. +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section -#### الاستخدام +#### Usage -> **ملاحظة**: جميع متغيرات الإعدادات الخاصة بوقت التشغيل يمكن تطبيقها إما كبارامترات للأمر عند بدء التشغيل أو باستخدام متغيرات البيئة بالتنسيق `COMPONENT_NAME_VARIABLE_NAME` (على سبيل المثال `INDEXER_AGENT_ETHEREUM`). +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). #### Indexer agent @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### خدمة المفهرس Indexer service +#### Indexer service ```sh SERVER_HOST=localhost \ @@ -522,35 +522,35 @@ graph indexer connect http://localhost:18000 graph indexer status ``` -#### إدارة المفهرس باستخدام مفهرس CLI +#### Indexer management using indexer CLI -يحتاج وكيل المفهرس indexer agent إلى مدخلات من المفهرس من أجل التفاعل بشكل مستقل مع الشبكة نيابة عن المفهرس. **قواعد الفهرسة** تقوم بتحديد سلوك وكيل المفهرس indexer agent. باستخدام **قواعد الفهرسة** يمكن للمفهرس تطبيق إستراتيجيته المحددة لانتقاء ال subgraphs للفهرسة وعرض الاستعلامات الخاصة بها. 
تتم إدارة القواعد عبر GraphQL API التي يقدمها الوكيل وتُعرف باسم API إدارة المفهرس. الأداة المقترحة للتفاعل مع ** API إدارة المفهرس ** هي ** Indexer CLI ** ، وهو امتداد لـ **Graph CLI**. +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. -#### الاستخدام +#### Usage -يتصل ** Indexer CLI ** بوكيل المفهرس indexer agent ، عادةً من خلال port-forwarding ، لذلك لا يلزم تشغيل CLI على نفس السيرفر أو المجموعة. ولمساعدتك على البدء سيتم وصف CLI بإيجاز هنا. +The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. -- `graph indexer connect ` - قم بالاتصال بAPI إدارة المفهرس. عادةً ما يتم فتح الاتصال بالسيرفر عبر إعادة توجيه المنفذ port forwarding ، لذلك يمكن تشغيل CLI بسهولة عن بُعد. (مثل: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - احصل على قاعدة أو أكثر من قواعد الفهرسة باستخدام `all` مثل `` للحصول على جميع القواعد, أو `global` للحصول على الافتراضات العالمية. يمكن استخدام argument إضافية `--merged` لتحديد قواعد النشر المحددة المدمجة مع القاعدة العامة. هذه هي الطريقة التي يتم تطبيقها في indexer agent. +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. -- `graph indexer rules set [options] ...` - قم بتعيين قاعدة أو أكثر من قواعد الفهرسة. +- `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - ابدأ فهرسة ال subgraph إذا كان متاحًا وقم بتعيين `decisionBasis` إلى `always`, لذلك دائما سيختار وكيل المفهرس فهرسته. إذا تم تعيين القاعدة العامة على دائما always ، فسيتم فهرسة جميع ال subgraphs المتاحة على الشبكة. +- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - توقف عن فهرسة النشر deployment وقم بتعيين ملف `decisionBasis` إلىnever أبدًا ، لذلك سيتم تخطي هذا النشر عند اتخاذ قرار بشأن عمليات النشر للفهرسة. +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. -- `graph indexer rules maybe [options] ` — ضع `thedecisionBasis` للنشر deployment ل `rules`, بحيث يستخدم وكيل المفهرس قواعد الفهرسة ليقرر ما إذا كان سيفهرس هذا النشر أم لا. 
+- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. -جميع الأوامر التي تعرض القواعد في الخرج output يمكنها الاختيار بين تنسيقات الإخراج المدعومة (`table`, `yaml`, `json`) باستخدام `-output` argument. +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. -#### قواعد الفهرسة +#### Indexing rules -يمكن تطبيق قواعد الفهرسة إما كإعدادات افتراضية عامة أو لعمليات نشر subgraph محددة باستخدام معرفاتها IDs. يعد الحقلان `deployment` و `decisionBasis` إلزاميًا ، بينما تعد جميع الحقول الأخرى اختيارية. عندما تحتوي قاعدة الفهرسة على `rules` باعتبارها `decisionBasis` ، فإن وكيل المفهرس indexer agent سيقارن قيم العتبة غير الفارغة في تلك القاعدة بالقيم التي تم جلبها من الشبكة. إذا كان نشر ال subgraph يحتوي على قيم أعلى (أو أقل) من أي من العتبات ، فسيتم اختياره للفهرسة. +Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -على سبيل المثال ، إذا كانت القاعدة العامة لديها`minStake` من ** 5 ** (GRT) ، فأي نشر subgraph به أكثر من 5 (GRT) من الحصة المخصصة ستتم فهرستها. قواعد العتبة تتضمن `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. -نموذج البيانات Data model: +Data model: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### نماذج التكلفة Cost models +#### Cost models -نماذج التكلفة تقوم بالتسعير بشكل ديناميكي للاستعلامات بناءً على خصائص السوق والاستعلام. خدمة المفهرس Indexer Service تشارك نموذج التكلفة مع البوابات gateways لكل subgraph للذين يريدون الرد على الاستفسارات. هذه البوابات تستخدم نموذج التكلفة لاتخاذ قرارات اختيار المفهرس لكل استعلام وللتفاوض بشأن الدفع مع المفهرسين المختارين. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. #### Agora -توفر لغة Agora تنسيقا مرنا للإعلان عن نماذج التكلفة للاستعلامات. نموذج سعر Agora هو سلسلة من العبارات التي يتم تنفيذها بالترتيب لكل استعلام عالي المستوى في GraphQL. بالنسبة إلى كل استعلام عالي المستوى top-level ، فإن العبارة الأولى التي تتطابق معه تحدد سعر هذا الاستعلام. +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. 
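Read together with the example model and pricing table below: assuming `$SYSTEM_LOAD` is set to 1, the query `{ pairs(skip: 5000) { id } }` matches the first statement because its `skip` value exceeds 2000 and is priced at 0.0001 × 5000 × 1 = 0.5 GRT, while `{ tokens { symbol } }` matches no specific statement and falls through to the default, 0.1 × 1 = 0.1 GRT.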
-تتكون العبارة من المسند predicate ، والذي يستخدم لمطابقة استعلامات GraphQL وتعبير التكلفة والتي عند تقييم النواتج تكون التكلفة ب GRT عشري. قيم الاستعلام الموجودة في ال argument ،قد يتم تسجيلها في المسند predicate واستخدامها في التعبير expression. يمكن أيضًا تعيين Globals وتعويضه في التعبير expression. +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. -مثال لتكلفة الاستعلام باستخدام النموذج أعلاه: +Example cost model: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -مثال على نموذج التكلفة: +Example query costing using the above model: -| الاستعلام | السعر | +| Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### تطبيق نموذج التكلفة +#### Applying the cost model -يتم تطبيق نماذج التكلفة عبر Indexer CLI ، والذي يقوم بتمريرها إلى وكيل المفهرس عبر API إدارة المفهرس للتخزين في قاعدة البيانات. بعد ذلك ستقوم خدمة المفهرس Indexer Service باستلامها وتقديم نماذج التكلفة للبوابات كلما طلبوا ذلك. +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## التفاعل مع الشبكة +## Interacting with the network ### Stake in the protocol -الخطوات الأولى للمشاركة في الشبكة كمفهرس هي الموافقة على البروتوكول وصناديق الأسهم، و (اختياريا) إعداد عنوان المشغل لتفاعلات البروتوكول اليومية. _ ** ملاحظة **: لأغراض الإرشادات ، سيتم استخدام Remix للتفاعل مع العقد ، ولكن لا تتردد في استخدام الأداة التي تختارها (\[OneClickDapp \](https: // oneclickdapp.com/) و [ ABItopic ](https://abitopic.io/) و [ MyCrypto ](https://www.mycrypto.com/account) وهذه بعض الأدوات المعروفة)._ +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ -بعد أن تم إنشاؤه بواسطة المفهرس ، يمر التخصيص السليم عبر أربع حالات. +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. -#### اعتماد التوكن tokens +#### Approve tokens -1. افتح [ تطبيق Remix ](https://remix.ethereum.org/) على المتصفح +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. في `File Explorer` أنشئ ملفا باسم ** GraphToken.abi ** باستخدام \[token ABI \](https://raw.githubusercontent.com/graphprotocol /contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. 
In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. مع تحديد `GraphToken.abi` وفتحه في المحرر ، قم بالتبديل إلى Deploy و `Run Transactions` في واجهة Remix. +3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. -4. تحت البيئة environment ، حدد `Injected Web3` وتحت `Account` حدد عنوان المفهرس. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. قم بتعيين عنوان GraphToken - الصق العنوان (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) بجوار `At Address` وانقر على الزر `At address` لتطبيق ذلك. +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. -6. استدعي دالة `approve(spender, amount)` للموافقة على عقد Staking. املأ `spender` بعنوان عقد Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) واملأ `amount` بالتوكن المراد عمل staking لها (في wei). +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). #### Stake tokens -1. افتح [ تطبيق Remix ](https://remix.ethereum.org/) على المتصفح +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. في `File Explorer` أنشئ ملفا باسم ** Staking.abi ** باستخدام Staking ABI. +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. -3. مع تحديد `Staking.abi` وفتحه في المحرر ، قم بالتبديل إلى قسم `Deploy` و `Run Transactions` في واجهة Remix. +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. -4. تحت البيئة environment ، حدد `Injected Web3` وتحت `Account` حدد عنوان المفهرس. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. عيّن عنوان عقد Staking - الصق عنوان عقد Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) بجوار `At address` وانقر على الزر `At address` لتطبيق ذلك. +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. -6. استدعي `stake()` لوضع GRT في البروتوكول. +6. Call `stake()` to stake GRT in the protocol. -7. (اختياري) يجوز للمفهرسين الموافقة على عنوان آخر ليكون المشغل للبنية الأساسية للمفهرس من أجل فصل المفاتيح keys التي تتحكم بالأموال عن تلك التي تقوم بإجراءات يومية مثل التخصيص على subgraphs وتقديم الاستعلامات (مدفوعة). لتعيين المشغل استدعي `setOperator()` بعنوان المشغل. +7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (اختياري) من أجل التحكم في توزيع المكافآت وجذب المفوضين بشكل استراتيجي ، يمكن للمفهرسين تحديث بارامترات التفويض الخاصة بهم عن طريق تحديث indexingRewardCut (أجزاء لكل مليون) ، و queryFeeCut (أجزاء لكل مليون) ، و cooldownBlocks (عدد الكتل). للقيام بذلك ، استدعي `setDelegationParameters()`. 
المثال التالي يعيّن queryFeeCut لتوزيع 95٪ من خصومات الاستعلام query rebates للمفهرس و 5٪ للمفوضين ، اضبط indexingRewardCut لتوزيع 60٪ من مكافآت الفهرسة للمفهرس و 40٪ للمفوضين ، وقم بتعيين فترة `thecooldownBlocks` إلى 500 كتلة. +8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### عمر التخصيص allocation +### The life of an allocation After being created by an indexer a healthy allocation goes through four states. -- ** نشط ** - بمجرد إنشاء تخصيص على السلسلة (\[allocateFrom()\](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/ Staking.sol # L873)) فهذا يعتبر ** نشطا **. يتم تخصيص جزء من حصة المفهرس الخاصة و / أو الحصة المفوضة لنشر subgraph ، مما يسمح لهم بالمطالبة بمكافآت الفهرسة وتقديم الاستعلامات لنشر ال subgraph. يدير وكيل المفهرس indexer agent إنشاء عمليات التخصيص بناء على قواعد المفهرس. +- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. -- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). عندما يتم إغلاق تخصيص بإثبات صالح للفهرسة (POI) ، يتم توزيع مكافآت الفهرسة الخاصة به على المفهرس والمفوضين (انظر "كيف يتم توزيع المكافآت؟" أدناه لمعرفة المزيد). +- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). -- ** مكتمل** - بمجرد إغلاق التخصيص ، توجد فترة اعتراض يتم بعدها اعتبار التخصيص ** مكتملا** ويكون خصومات رسوم الاستعلام متاحة للمطالبة بها (claim()). وكيل المفهرس indexer agent يراقب الشبكة لاكتشاف التخصيصات ** المكتملة ** ويطالب بها إذا كانت أعلى من العتبة (واختياري) ، ** عتبة-مطالبة-التخصيص **. +- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. 
-- ** مُطالب به ** - هي الحالة النهائية للتخصيص ؛ وهي التي سلكت مجراها كمخصصة نشطة ، وتم توزيع جميع المكافآت المؤهلة وتمت المطالبة بخصومات رسوم الاستعلام. +- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. From 5c3fc9862938896e14227da91d00786008a6cdf9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:35 -0500 Subject: [PATCH 095/241] New translations global.json (Vietnamese) --- pages/vi/global.json | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/pages/vi/global.json b/pages/vi/global.json index 0967ef424bce..72b4fc820dba 100644 --- a/pages/vi/global.json +++ b/pages/vi/global.json @@ -1 +1,17 @@ -{} +{ + "language": "Language", + "aboutTheGraph": "About The Graph", + "developer": "Nhà phát triển", + "supportedNetworks": "Mạng lưới được hỗ trợ", + "collapse": "Collapse", + "expand": "Expand", + "previous": "Previous", + "next": "Next", + "editPage": "Edit page", + "pageSections": "Page Sections", + "linkToThisSection": "Link to this section", + "technicalLevelRequired": "Technical Level Required", + "notFoundTitle": "Oops! This page was lost in space...", + "notFoundSubtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "goHome": "Go Home" +} From e0aea28692cf5498f6ba732db4b3be78ed3284f9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:36 -0500 Subject: [PATCH 096/241] New translations index.json (Spanish) --- pages/es/index.json | 78 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/pages/es/index.json b/pages/es/index.json index 0967ef424bce..0c98cc47940c 100644 --- a/pages/es/index.json +++ b/pages/es/index.json @@ -1 +1,77 @@ -{} +{ + "title": "Get Started", + "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", + "shortcuts": { + "aboutTheGraph": { + "title": "About The Graph", + "description": "Aprende más sobre The Graph" + }, + "quickStart": { + "title": "Quick Start", + "description": "Jump in and start with The Graph" + }, + "developerFaqs": { + "title": "Developer FAQs", + "description": "Frequently asked questions" + }, + "queryFromAnApplication": { + "title": "Query from an Application", + "description": "Learn to query from an application" + }, + "createASubgraph": { + "title": "Create a Subgraph", + "description": "Use Studio to create subgraphs" + }, + "migrateFromHostedService": { + "title": "Migrate from Hosted Service", + "description": "Migrating subgraphs to The Graph Network" + } + }, + "networkRoles": { + "title": "Network Roles", + "description": "Learn about The Graph’s network roles.", + "roles": { + "developer": { + "title": "Desarrollador", + "description": "Create a subgraph or use existing subgraphs in a dapp" + }, + "indexer": { + "title": "indexación", + "description": "Operador de nodos encargado de indexar los datos y proveer consultas" + }, + "curator": { + "title": "curación", + "description": "Organiza los datos mediante la señalización de subgrafos" + }, + "delegator": { + "title": "delegación", + "description": "Se encarga de proteger la red al delegar sus GRT a los Indexadores" + } + } + }, + "readMore": "Read more", + "products": { + "title": "Productos", + "products": { + "subgraphStudio": { + "title": "Subgraph Studio", + 
"description": "Create, manage and publish subgraphs and API keys" + }, + "graphExplorer": { + "title": "Graph Explorer", + "description": "Explora los distintos subgrafos e interactua con el protocolo" + }, + "hostedService": { + "title": "Hosted Service", + "description": "Crea y explora subgrafos en el servicio alojado" + } + } + }, + "supportedNetworks": { + "title": "Redes admitidas", + "description": "The Graph supports the following networks on The Graph Network and the Hosted Service.", + "graphNetworkAndHostedService": "The Graph Network & Hosted Service", + "hostedService": "Hosted Service", + "betaWarning": "Network is in beta. Use with caution." + } +} From dc722bd661c8a9778dbf2684e3705e824048f99a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:37 -0500 Subject: [PATCH 097/241] New translations index.json (Arabic) --- pages/ar/index.json | 78 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/pages/ar/index.json b/pages/ar/index.json index 0967ef424bce..6f92f6870a4e 100644 --- a/pages/ar/index.json +++ b/pages/ar/index.json @@ -1 +1,77 @@ -{} +{ + "title": "Get Started", + "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", + "shortcuts": { + "aboutTheGraph": { + "title": "About The Graph", + "description": "تعرف أكثر حول The Graph" + }, + "quickStart": { + "title": "Quick Start", + "description": "Jump in and start with The Graph" + }, + "developerFaqs": { + "title": "Developer FAQs", + "description": "Frequently asked questions" + }, + "queryFromAnApplication": { + "title": "Query from an Application", + "description": "Learn to query from an application" + }, + "createASubgraph": { + "title": "Create a Subgraph", + "description": "Use Studio to create subgraphs" + }, + "migrateFromHostedService": { + "title": "Migrate from Hosted Service", + "description": "Migrating subgraphs to The Graph Network" + } + }, + "networkRoles": { + "title": "Network Roles", + "description": "Learn about The Graph’s network roles.", + "roles": { + "developer": { + "title": "المطور", + "description": "Create a subgraph or use existing subgraphs in a dapp" + }, + "indexer": { + "title": "فهرسة (indexing)", + "description": "تشغيل عقدة node وذلك لفهرسة البيانات وتقديم الاستعلامات" + }, + "curator": { + "title": "(التنسيق) curating", + "description": "تنظيم البيانات بواسطة الإشارة إلى subgraphs" + }, + "delegator": { + "title": "تفويض", + "description": "تأمين الشبكة عن طريق تفويض GRT للمفهرسين" + } + } + }, + "readMore": "Read more", + "products": { + "title": "المنتجات", + "products": { + "subgraphStudio": { + "title": "Subgraph Studio", + "description": "Create, manage and publish subgraphs and API keys" + }, + "graphExplorer": { + "title": "Graph Explorer", + "description": "Explore subgraphs and interact with the protocol" + }, + "hostedService": { + "title": "الخدمة المستضافة", + "description": "Create and explore subgraphs on the Hosted Service" + } + } + }, + "supportedNetworks": { + "title": "الشبكات المدعومة", + "description": "The Graph supports the following networks on The Graph Network and the Hosted Service.", + "graphNetworkAndHostedService": "The Graph Network & Hosted Service", + "hostedService": "الخدمة المستضافة", + "betaWarning": "Network is in beta. Use with caution." 
+ } +} From 3fcd76f0f9abb55b8af9f45085fe05d2f1a080f8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:38 -0500 Subject: [PATCH 098/241] New translations index.json (Japanese) --- pages/ja/index.json | 78 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/pages/ja/index.json b/pages/ja/index.json index 0967ef424bce..39c600880dbf 100644 --- a/pages/ja/index.json +++ b/pages/ja/index.json @@ -1 +1,77 @@ -{} +{ + "title": "Get Started", + "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", + "shortcuts": { + "aboutTheGraph": { + "title": "About The Graph", + "description": "The Graphについて学ぶ" + }, + "quickStart": { + "title": "Quick Start", + "description": "Jump in and start with The Graph" + }, + "developerFaqs": { + "title": "Developer FAQs", + "description": "Frequently asked questions" + }, + "queryFromAnApplication": { + "title": "Query from an Application", + "description": "Learn to query from an application" + }, + "createASubgraph": { + "title": "Create a Subgraph", + "description": "Use Studio to create subgraphs" + }, + "migrateFromHostedService": { + "title": "Migrate from Hosted Service", + "description": "Migrating subgraphs to The Graph Network" + } + }, + "networkRoles": { + "title": "Network Roles", + "description": "Learn about The Graph’s network roles.", + "roles": { + "developer": { + "title": "ディベロッパー", + "description": "Create a subgraph or use existing subgraphs in a dapp" + }, + "indexer": { + "title": "インデクシング", + "description": "ノードを稼働してデータインデックスを作成し、クエリサービスを提供する" + }, + "curator": { + "title": "キューレーティング", + "description": "サブグラフのシグナリングによるデータの整理" + }, + "delegator": { + "title": "デリゲーティング", + "description": "保有GRTをインデクサーに委任することでネットワークの安全性を確保" + } + } + }, + "readMore": "Read more", + "products": { + "title": "プロダクト", + "products": { + "subgraphStudio": { + "title": "Subgraph Studio", + "description": "Create, manage and publish subgraphs and API keys" + }, + "graphExplorer": { + "title": "グラフエクスプローラ", + "description": "サブグラフの探索とプロトコルとの対話" + }, + "hostedService": { + "title": "Hosted Service", + "description": "ホストサービスでのサブグラフの作成と探索" + } + } + }, + "supportedNetworks": { + "title": "Supported Networks", + "description": "The Graph supports the following networks on The Graph Network and the Hosted Service.", + "graphNetworkAndHostedService": "The Graph Network & Hosted Service", + "hostedService": "Hosted Service", + "betaWarning": "Network is in beta. Use with caution." 
+ } +} From 65367245246c1e72d31c3d6103b40574e7c9603b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:39 -0500 Subject: [PATCH 099/241] New translations index.json (Korean) --- pages/ko/index.json | 78 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/pages/ko/index.json b/pages/ko/index.json index 0967ef424bce..ccd5906c050e 100644 --- a/pages/ko/index.json +++ b/pages/ko/index.json @@ -1 +1,77 @@ -{} +{ + "title": "Get Started", + "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", + "shortcuts": { + "aboutTheGraph": { + "title": "About The Graph", + "description": "The Graph에 대해 더 알아보기" + }, + "quickStart": { + "title": "Quick Start", + "description": "Jump in and start with The Graph" + }, + "developerFaqs": { + "title": "Developer FAQs", + "description": "Frequently asked questions" + }, + "queryFromAnApplication": { + "title": "Query from an Application", + "description": "Learn to query from an application" + }, + "createASubgraph": { + "title": "Create a Subgraph", + "description": "Use Studio to create subgraphs" + }, + "migrateFromHostedService": { + "title": "Migrate from Hosted Service", + "description": "Migrating subgraphs to The Graph Network" + } + }, + "networkRoles": { + "title": "Network Roles", + "description": "Learn about The Graph’s network roles.", + "roles": { + "developer": { + "title": "개발자", + "description": "Create a subgraph or use existing subgraphs in a dapp" + }, + "indexer": { + "title": "인덱싱(indexing)", + "description": "데이터 인덱싱 혹은 쿼리 제공을 위해 노드를 운영합니다." + }, + "curator": { + "title": "큐레이팅", + "description": "서브그래프들에 신호를 보냄으로써 데이터를 구성합니다." + }, + "delegator": { + "title": "위임하기", + "description": "인덱서들에게 GRT를 위임함으로써 네트워크를 보안에 기여합니다." + } + } + }, + "readMore": "Read more", + "products": { + "title": "제품", + "products": { + "subgraphStudio": { + "title": "Subgraph Studio", + "description": "Create, manage and publish subgraphs and API keys" + }, + "graphExplorer": { + "title": "Graph Explorer", + "description": "Explore subgraphs and interact with the protocol" + }, + "hostedService": { + "title": "Hosted Service", + "description": "Create and explore subgraphs on the Hosted Service" + } + } + }, + "supportedNetworks": { + "title": "Supported Networks", + "description": "The Graph supports the following networks on The Graph Network and the Hosted Service.", + "graphNetworkAndHostedService": "The Graph Network & Hosted Service", + "hostedService": "Hosted Service", + "betaWarning": "Network is in beta. Use with caution." 
+ } +} From 61ccf035900a838804758099932cbb3f64ce35b3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:40 -0500 Subject: [PATCH 100/241] New translations index.json (Chinese Simplified) --- pages/zh/index.json | 78 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/pages/zh/index.json b/pages/zh/index.json index 0967ef424bce..915cf97d06a8 100644 --- a/pages/zh/index.json +++ b/pages/zh/index.json @@ -1 +1,77 @@ -{} +{ + "title": "Get Started", + "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", + "shortcuts": { + "aboutTheGraph": { + "title": "About The Graph", + "description": "了解有关The Graph的更多信息" + }, + "quickStart": { + "title": "Quick Start", + "description": "Jump in and start with The Graph" + }, + "developerFaqs": { + "title": "Developer FAQs", + "description": "Frequently asked questions" + }, + "queryFromAnApplication": { + "title": "Query from an Application", + "description": "Learn to query from an application" + }, + "createASubgraph": { + "title": "Create a Subgraph", + "description": "Use Studio to create subgraphs" + }, + "migrateFromHostedService": { + "title": "Migrate from Hosted Service", + "description": "Migrating subgraphs to The Graph Network" + } + }, + "networkRoles": { + "title": "Network Roles", + "description": "Learn about The Graph’s network roles.", + "roles": { + "developer": { + "title": "开发者", + "description": "Create a subgraph or use existing subgraphs in a dapp" + }, + "indexer": { + "title": "索引", + "description": "操作节点以索引数据并提供查询" + }, + "curator": { + "title": "策展", + "description": "通过在子图上发出信号来组织数据" + }, + "delegator": { + "title": "委托", + "description": "通过将 GRT 委托给索引人来保护网络" + } + } + }, + "readMore": "Read more", + "products": { + "title": "产品", + "products": { + "subgraphStudio": { + "title": "子图工作室", + "description": "Create, manage and publish subgraphs and API keys" + }, + "graphExplorer": { + "title": "Graph 浏览器", + "description": "探索子图并与协议互动" + }, + "hostedService": { + "title": "托管服务", + "description": "在托管服务上创建和探索子图" + } + } + }, + "supportedNetworks": { + "title": "支持的网络", + "description": "The Graph supports the following networks on The Graph Network and the Hosted Service.", + "graphNetworkAndHostedService": "The Graph Network & Hosted Service", + "hostedService": "托管服务", + "betaWarning": "Network is in beta. Use with caution." + } +} From 26281bcfe59b423e222f490032c18948fc7d25bc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:41 -0500 Subject: [PATCH 101/241] New translations indexing.mdx (Japanese) --- pages/ja/indexing.mdx | 373 +++++++++++++++++++++--------------------- 1 file changed, 187 insertions(+), 186 deletions(-) diff --git a/pages/ja/indexing.mdx b/pages/ja/indexing.mdx index e02be5538cbc..ac9eab223e4f 100644 --- a/pages/ja/indexing.mdx +++ b/pages/ja/indexing.mdx @@ -4,51 +4,51 @@ title: インデクシング import { Difficulty } from '@/components' -インデクサは、グラフネットワークのノードオペレータであり、グラフトークン(GRT)を賭けて、インデックス作成や問い合わせ処理のサービスを提供します。 インデクサーは、そのサービスの対価として、クエリフィーやインデックス作成の報酬を得ることができます。 また、Cobbs-Douglas Rebate Function に基づいて、ネットワーク貢献者全員にその成果に応じて分配される Rebate Pool からも報酬を得ることもできます。 +Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. 
They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. -プロトコルにステークされた GRT は解凍期間が設けられており、インデクサーが悪意を持ってアプリケーションに不正なデータを提供したり、不正なインデックスを作成した場合には、スラッシュされる可能性があります。 また、インデクサーはデリゲーターからステークによる委任を受けて、ネットワークに貢献することができます。 +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. -インデクサ − は、サブグラフのキュレーション・シグナルに基づいてインデックスを作成するサブグラフを選択し、キュレーターは、どのサブグラフが高品質で優先されるべきかを示すために GRT をステークします。 消費者(アプリケーションなど)は、インデクサーが自分のサブグラフに対するクエリを処理するパラメータを設定したり、クエリフィーの設定を行うこともできます。 +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. -## よくある質問 +## FAQ -### ネットワーク上のインデクサーになるために必要な最低ステーク量はいくらですか? +### What is the minimum stake required to be an indexer on the network? -インデクサーの最低ステーク量は、現在 100K GRT に設定されています。 +The minimum stake for an indexer is currently set to 100K GRT. -### インデクサーの収入源は何ですか? +### What are the revenue streams for an indexer? -**クエリフィー・リベート** - ネットワーク上でクエリを提供するための手数料です。 この手数料は、インデクサーとゲートウェイ間のステートチャネルを介して支払われます。 ゲートウェイからの各クエリリクエストには手数料が含まれ、対応するレスポンスにはクエリ結果の有効性の証明が含まれます。 +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**インデキシング報酬** - プロトコル全体のインフレーションにより生成される年率 3%のインデキシング報酬は、ネットワークのサブグラフ・デプロイメントのインデキシングを行うインデクサーに分配されます。 +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. -### 報酬の分配方法は? +### How are rewards distributed? -インデキシング報酬は、年間 3%の発行量に設定されているプロトコル・インフレから得られます。 報酬は、それぞれのサブグラフにおけるすべてのキュレーション・シグナルの割合に基づいてサブグラフに分配され、そのサブグラフに割り当てられたステークに基づいてインデクサーに分配されます。 **特典を受けるためには、仲裁憲章で定められた基準を満たす有効なPOI(Proof of Indexing)で割り当てを終了する必要があります。** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -コミュニティでは、報酬を計算するための数多くのツールが作成されており、それらは[コミュニティガイドコレクション](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)にまとめられています。 また、[Discord サーバー](https://discord.gg/vtvv7FP)の#delegators チャンネルや#indexers チャンネルでも、最新のツールリストを見ることができます。 +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). -### POI(proof of indexing)とは何ですか? +### What is a proof of indexing (POI)? 
-POI は、インデクサーが割り当てられたサブグラフにインデックスを作成していることを確認するためにネットワークで使用されます。 現在のエポックの最初のブロックに対する POI は、割り当てを終了する際に提出しなければ、その割り当てはインデックス報酬の対象となりません。 あるブロックの POI は、そのブロックまでの特定のサブグラフのデプロイに対するすべてのエンティティストアのトランザクションのダイジェストです。 +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. -### インデキシングリワードはいつ配布されますか? +### When are indexing rewards distributed? -割り当ては、それがアクティブである間、継続的に報酬を発生させます。 報酬はインデクサによって集められ、割り当てが終了するたびに分配されます。 これは、インデクサーが強制的に閉じようとしたときに手動で行うか、28 エポックの後にデリゲーターがインデクサーのために割り当てを終了することができますが、この場合は報酬がミントされません。 28 エポックは最大の割り当て期間です(現在、1 エポックは約 24 時間です) +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). -### 保留中のインデクサーの報酬は監視できますか? +### Can pending indexer rewards be monitored? -コミュニティが作成したダッシュボードの多くは保留中の報酬の値を含んでおり、以下の手順で簡単に手動で確認することができます。 +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. -Etherscan を使った`getRewards()`の呼び出し: +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. [メインネット・サブグラフ](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet)にクエリして、全てのアクティブなアロケーションの ID を取得します。 +1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: ```graphql query indexerAllocations { - indexer(id: "") { + indexer(id: "") { allocations { activeForIndexer { allocations { @@ -62,57 +62,57 @@ query indexerAllocations { Use Etherscan to call `getRewards()`: -- Etherscan interface to Rewards contract に移動します。 +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* `getRewards()`を呼び出します - - **10を拡大します。 getRewards**のドロップダウン - - 入力欄に**allocationID**を入力 - - **Query**ボタンをクリック +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. -### 争議(disputes)とは何で、どこで見ることができますか? +### What are disputes and where can I view them? -インデクサークエリとアロケーションは、期間中に The Graph 上で争議することができます。 争議期間は、争議の種類によって異なります。 クエリ/裁定には7エポックスの紛争窓口があり、割り当てには56エポックスがあります。 これらの期間が経過した後は、割り当てやクエリのいずれに対しても紛争を起こすことはできません。 紛争が開始されると、Fishermenは最低10,000GRTのデポジットを要求され、このデポジットは紛争が最終的に解決されるまでロックされます。 フィッシャーマンとは、紛争を開始するネットワーク参加者のことです。 +Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. 
When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. -争議は UI のインデクサーのプロフィールページの`Disputes`タブで確認できます。 +Disputes have **three** possible outcomes, so does the deposit of the Fishermen. -- 争議が却下された場合、フィッシャーマンが預かった GRT はバーンされ、争議中のインデクサーはスラッシュされません。 -- 争議が引き分けた場合、フィッシャーマンのデポジットは返還され、争議中のインデクサーはスラッシュされることはありません。 -- 争議が受け入れられた場合、フィッシャーマンがデポジットした GRT は返却され、争議中のインデクサーはスラッシュされ、フィッシャーマンはスラッシュされた GRT の 50%を獲得します。 +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. +- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. -紛争は、UIのインデクサーのプロファイルページの`紛争`タブで確認できます。 +Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. -### クエリフィーリベートとは何ですか、またいつ配布されますか? +### What are query fee rebates and when are they distributed? -クエリフィーは、割り当てが終了するたびにゲートウェイが徴収し、サブグラフのクエリフィーリベートプールに蓄積されます。 リベートプールは、インデクサーがネットワークのために獲得したクエリフィーの量にほぼ比例してステークを割り当てるように促すためのものです。 プール内のクエリフィーのうち、特定のインデクサーに割り当てられる部分はコブス・ダグラス生産関数を用いて計算されます。 インデクサーごとの分配額は、プールへの貢献度とサブグラフでのステークの割り当ての関数となります。 +Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. -割り当てが終了し、争議期間が経過すると、リベートをインデクサーが請求できるようになります。 請求されたクエリフィーのリベートは、クエリフィーカットとデリゲーションプールの比率に基づいて、インデクサーとそのデリゲーターに分配されます。 +Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. -### クエリフィーカットとインデキシングリワードカットとは? +### What is query fee cut and indexing reward cut? -`クエリフィーカット` と`インデキシングリワードカット` の値は、インデクサーが クールダウンブロックと共に設定できるデリゲーションパラメータで、インデクサーとそのデリゲーター間の GRT の分配を制御するためのものです。 デリゲーションパラメータの設定方法については、[Staking in the Protocol](/indexing#stake-in-the-protocol)の最後のステップを参照してください。 +The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. -- **クエリフィーカット** - サブグラフに蓄積されたクエリフィーリベートのうち、インデクサーに分配される割合です。 これが 95%に設定されていると、割り当てが要求されたときに、インデクサはクエリフィー・リベート・プールの 95%を受け取り、残りの 5%はデリゲータに渡されます。 +- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. 
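As a rough illustration of what a cut expressed in parts per million means for this split, the TypeScript sketch below divides a hypothetical rebate pool according to a 950000 PPM (95%) cut. The pool size is invented, and real on-chain accounting involves more detail (for example the indexer's own stake versus the delegation pool), so treat this as arithmetic only; the same conversion applies to the indexing reward cut described next.

```typescript
// Both cuts are expressed in parts per million (PPM): 950000 PPM == 95%.
const PPM = 1_000_000;

function splitPool(poolGRT: number, cutPPM: number) {
  // Indexer takes cutPPM of the pool, delegators share the remainder.
  const indexerShare = (poolGRT * cutPPM) / PPM;
  const delegatorShare = poolGRT - indexerShare;
  return { indexerShare, delegatorShare };
}

// A hypothetical 1,000 GRT query fee rebate pool with queryFeeCut = 950000 (95%):
console.log(splitPool(1_000, 950_000)); // { indexerShare: 950, delegatorShare: 50 }
```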
-- **インデキシング・リワードカット** - サブグラフに蓄積されたインデキシング・リワードのうち、インデクサーに分配される割合です。 これが 95%に設定されていると、割り当てが終了したときに、インデクサがインデキシング・リワードプールの 95%を受け取り、残りの 5%をデリゲータが分け合うことになります。 +- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. -### インデクサーはどのサブグラフにインデックスを付けるかをどう見分けるのですか? +### How do indexers know which subgraphs to index? -インデクサーは、サブグラフのインデキシングの決定に高度な技術を適用することで差別化を図ることができますが、一般的な考え方として、ネットワーク内のサブグラフを評価するために使用されるいくつかの主要な指標について説明します。 +Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: -- **キュレーションシグナル** - 特定のサブグラフに適用されたネットワークキュレーションシグナルの割合は、そのサブグラフへの関心を示す指標となり、特にクエリのボリュームが増加しているブートストラップ段階では有効となります。 +- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. -- **コレクティド・クエリフィー** - 特定のサブグラフに対してコレクティド・クエリフィー量の履歴データは、将来的な需要に対する指標となります。 +- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. -- **ステーク量** - 他のインデクサーの行動を監視したり、特定のサブグラフに割り当てられた総ステーク量の割合を見ることで、インデクサーはサブグラフ・クエリの供給側を監視し、ネットワークが信頼を示しているサブグラフや、より多くの供給を必要としているサブグラフを特定することができます。 +- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. -- **インデックス報酬のないサブグラフ** - 一部のサブグラフは、主に IPFS などのサポートされていない機能を使用していたり、メインネット外の別のネットワークをクエリしていたりするため、インデックス報酬を生成しません。 インデクシング・リワードを生成していないサブグラフにはメッセージが表示されます。 +- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. -### 必要なハードウェアは何ですか? +### What are the hardware requirements? -- **Small** - いくつかのサブグラフのインデックス作成を開始するのに十分ですが、おそらく拡張が必要になります -- **Standard** - デフォルトのセットアップであり、k8s/terraform の展開マニフェストの例で使用されているものです -- **Medium** - 100 個のサブグラフと 1 秒あたり 200 ~ 500 のリクエストをサポートするプロダクションインデクサー -- **Large** - 現在使用されているすべてのサブグラフのインデックスを作成し、関連するトラフィックのリクエストに対応します +- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. +- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| @@ -121,48 +121,48 @@ Use Etherscan to call `getRewards()`: | Medium | 16 | 64 | 2 | 32 | 64 | | Large | 72 | 468 | 3.5 | 48 | 184 | -### インデクサーが取るべきセキュリティ対策は? +### What are some basic security precautions an indexer should take? -- **Operator wallet** - オペレーター・ウォレットを設定することは、インデクサーがステークを管理するキーと日々のオペレーションを管理するキーを分離することができるため、重要な予防策となります。 設定方法については [Stake in Protocol](/indexing#stake-in-the-protocol)をご覧ください。 +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. -- **Important**: ポートの公開には注意が必要です。 **管理用ポート**はロックしておくべきです。 これには、以下に示すグラフノードの JSON-RPC とインデクサ管理用のエンドポイントが含まれます。 +- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. -## インフラストラクチャ +## Infrastructure -インデクサーのインフラの中心となるのは、イーサリアムを監視し、サブグラフの定義に従ってデータを抽出・ロードし、[GraphQL API](/about/introduction#how-the-graph-works)として提供するグラフノードです。 グラフノードには、イーサリアムの EVM ノードのエンドポイントと、データを取得するための IPFS ノード、ストア用の PostgreSQL データベース、ネットワークとのやりとりを促進するインデクサーのコンポーネントが接続されている必要があります。 +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. -- **PostgreSQLPostgreSQL データベース** - グラフノードのメインストアで、サブグラフのデータが格納されています。 また、インデクササービスとエージェントは、データベースを使用して、ステートチャネルデータ、コストモデル、およびインデクシングルールを保存します。 +- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. -- **イーサリアムエンドポイント** - Ethereum JSON-RPC API を公開するエンドポイントです。 これは単一のイーサリアムクライアントの形をとっているかもしれませんし、複数のイーサリアムクライアント間でロードバランスをとるような複雑なセットアップになっているかもしれません。 特定のサブグラフには、アーカイブモードやトレース API など、特定のイーサリアムクライアント機能が必要になることを認識しておくことが重要です。 +- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. -- **IPFS ノード(バージョン 5 未満)** - サブグラフのデプロイメタデータは IPFS ネットワーク上に保存されます。 グラフノードは、サブグラフのデプロイ時に主に IPFS ノードにアクセスし、サブグラフマニフェストと全てのリンクファイルを取得します。 ネットワーク・インデクサーは独自の IPFS ノードをホストする必要はありません。 ネットワーク用の IPFS ノードは、https://ipfs.network.thegraph.com でホストされています。 +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. 
Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. -- **Indexer service** - ネットワークとの必要な外部通信を全て処理します。 コストモデルとインデキシングのステータスを共有し、ゲートウェイからのクエリ要求をグラフノードに渡し、ゲートウェイとのステートチャンネルを介してクエリの支払いを管理します。 +- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - ネットワークへの登録、グラフノードへのサブグラフのデプロイ管理、割り当ての管理など、チェーン上のインデクサーのインタラクションを容易にします。 Prometheus メトリクス・サーバー - グラフノードとインデクサー・コンポーネントは、それぞれのメトリクスをメトリクス・サーバーに記録します。 +- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. -コマンドを実行する前に、[variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf)に目を通し、このディレクトリに`terraform.tfvars` というファイルを作成します(または、前のステップで作成したものを修正します) デフォルトを上書きしたい変数や、値を設定したい変数ごとに、`terraform.tfvars`に設定を入力します。 +Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. -### ポートの概要 +### Ports overview -> **ファイアウォール** - インデクサーのサービスのみを公開し、管理ポートとデータベースへのアクセスをロックすることに特に注意を払う必要があります。 グラフノードの JSON-RPC エンドポイント(デフォルトポート:8030)、インデクサー管理 API エンドポイント(デフォルトポート:18000)、Postgres データベースエンドポイント(デフォルトポート:5432)を公開してはいけません。 +> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. -#### グラフノード +#### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -170,25 +170,25 @@ Use Etherscan to call `getRewards()`: | ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | | 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Google Cloud で Terraform を使ってサーバーインフラを構築 +### Setup server infrastructure using Terraform on Google Cloud -#### インストールの前提条件 +#### Install prerequisites - Google Cloud SDK -- Kubectl コマンドラインツール +- Kubectl command line tool - Terraform -#### Google Cloud プロジェクトの作成 +#### Create a Google Cloud Project -- クローンまたはインデクサーリポジトリに移動 +- Clone or navigate to the indexer repository. -- ./terraform ディレクトリに移動し、ここですべてのコマンドを実行 +- Navigate to the ./terraform directory, this is where all commands should be executed. ```sh cd terraform ``` -- Google Cloud で認証し、新しいプロジェクトを作成 +- Authenticate with Google Cloud and create a new project. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Google Cloud Console の\[billing page\](課金ページ) を使用して、新しいプロジェクトの課金を有効にします。 +- Use the Google Cloud Console's billing page to enable billing for the new project. -- Google Cloud の設定を作成します。 +- Create a Google Cloud configuration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Google Cloud API の設定 +- Enable required Google Cloud APIs. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- サービスアカウントを作成 +- Create a service account. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- データベースと次のステップで作成する Kubernetes クラスター間のピアリングを有効化 +- Enable peering between database and Kubernetes cluster that will be created in the next step. ```sh gcloud compute addresses create google-managed-services-default \ @@ -243,12 +243,13 @@ gcloud compute addresses create google-managed-services-default \ --purpose=VPC_PEERING \ --network default \ --global \ - --description 'IP Range for peer networks.' gcloud services vpc-peerings connect \ + --description 'IP Range for peer networks.' +gcloud services vpc-peerings connect \ --network=default \ --ranges=google-managed-services-default ``` -- Terraform 設定ファイルを作成(必要に応じて更新してください) +- Create minimal terraform configuration file (update as needed). ```sh indexer= @@ -259,24 +260,24 @@ database_password = "" EOF ``` -#### Terraform を使ってインフラを構築 +#### Use Terraform to create infrastructure -コマンドを実行する前に、[variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf)に目を通し、このディレクトリに`terraform.tfvars`というファイルを作成します(または、前のステップで作成したものを修正します)。 デフォルトを上書きしたい、あるいは値を設定したい各変数について、`terraform.tfvars`に設定を入力します。 +Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. -- 以下のコマンドを実行して、インフラを作成します。 +- Run the following commands to create the infrastructure. 
```sh -# 必要なプラグインのインストール +# Install required plugins terraform init -# 作成されるリソースのプランを見る +# View plan for resources to be created terraform plan -# リソースの作成(最大で30分程度かかる見込みです) +# Create the resources (expect it to take up to 30 minutes) terraform apply ``` -`kubectl apply -k $dir`ですべてのリソースをデプロイします。 +Download credentials for the new cluster into `~/.kube/config` and set it as your default context. ```sh gcloud container clusters get-credentials $indexer @@ -284,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### インデクサー用の Kubernetes コンポーネントの作成 +#### Creating the Kubernetes components for the indexer -- `k8s/overlays`ディレクトリを新しいディレクトリ`$dir,`にコピーし、`$dir/kustomization.yaml`内の`bases`エントリが`k8s/base`ディレクトリを指すように調整します。 +- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. -- `$dir` にあるすべてのファイルを読み、コメントに示されている値を調整します。 +- Read through all the files in `$dir` and adjust any values as indicated in the comments. Deploy all resources with `kubectl apply -k $dir`. -### グラフノード +### Graph Node -[グラフノード](https://github.com/graphprotocol/graph-node)はオープンソースの Rust 実装で、Ethereum ブロックチェーンをイベントソースにして、GraphQL エンドポイントでクエリ可能なデータストアを決定論的に更新します。 開発者は、サブグラフを使ってスキーマを定義し、ブロックチェーンから供給されるデータを変換するためのマッピングセットを使用します。 グラフノードは、チェーン全体の同期、新しいブロックの監視、GraphQL エンドポイント経由での提供を処理します。 +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. -#### ソースからのスタート +#### Getting started from source -#### インストールの前提条件 +#### Install prerequisites - **Rust** @@ -306,7 +307,7 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Ubuntu ユーザーのための追加要件** - グラフノードを Ubuntu 上で動作させるためには、いくつかの追加パッケージが必要になります。 +- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config @@ -314,7 +315,7 @@ sudo apt-get install -y clang libpg-dev libssl-dev pkg-config #### Setup -1. PostgreSQL データベースサーバを起動します。 +1. Start a PostgreSQL database server ```sh initdb -D .postgres @@ -322,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. [グラフノード Graph Node](https://github.com/graphprotocol/graph-node)のリポジトリをクローンし、cargo build を実行してソースをビルドします。 +2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` -3. 全ての依存関係の設定が完了したら、グラフノードを起動します: +3. 
Now that all the dependencies are setup, start the Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -333,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Docker の使用 +#### Getting started using Docker -#### 前提条件 +#### Prerequisites -- **イーサリアムノード** - デフォルトでは、docker compose setup は mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545)を使ってホストマシン上のイーサリアムノードに接続します。 このネットワーク名と URL は、`docker-compose.yaml`を更新することで置き換えることができます。 +- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. #### Setup -1. Graph Node をクローンし、Docker ディレクトリに移動します。 +1. Clone Graph Node and navigate to the Docker directory: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. Linux ユーザーのみ - 付属のスクリプトを使って、`docker-compose.yaml`の中で`host.docker.internal`の代わりにホストの IP アドレスを使用します: +2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: ```sh ./setup.sh ``` -3. Ethereum のエンドポイントに接続し、ローカルの Graph Node を起動します: +3. Start a local Graph Node that will connect to your Ethereum endpoint: ```sh docker-compose up ``` -### インデクサーコンポーネント +### Indexer components -ネットワークへの参加を成功させるためには、ほぼ常に監視と対話を行う必要があるため、Indexers のネットワークへの参加を促進するための一連の Typescript アプリケーションを構築しました。 インデクサーには 3 つのコンポーネントがあります: +To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: -- **Indexer agent** - ネットワークとインデクサー自身のインフラを監視し、どのサブグラフ・デプロイメントがインデキシングされ、チェーンに割り当てられるか、またそれぞれにどれだけの量が割り当てられるかを管理します。 +- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. -- **Indexer service** - 外部に公開する必要のある唯一のコンポーネントで、サブグラフのクエリをグラフノードに渡し、クエリの支払いのための状態チャンネルを管理し、重要な意思決定情報をゲートウェイなどのクライアントに共有します。 +- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- **インデクサー CLI** - インデクサーエージェントを管理するためのコマンドラインインターフェースです。 インデクサーがコストモデルやインデクシングルールを管理するためのもの。 +- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. -#### はじめに +#### Getting started -インデクサーエージェントとインデクサーサービスは、グラフノードインフラストラクチャーと同居している必要があります。 ここでは、NPM パッケージやソースを使ってベアメタル上で実行する方法と、Google Cloud Kubernetes Engine 上で kubernetes や docker を使って実行する方法を説明します。 これらの設定例があなたのインフラに適用できない場合は、コミュニティガイドを参照するか、[Discord](https://thegraph.com/discord)でお問い合わせください。 インデクサーコンポーネントを起動する前に、[プロトコルのステーク](/indexing#stake-in-the-protocol) を忘れないでください。 +The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. 
If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! -#### NPM パッケージから +#### From NPM packages ```sh npm install -g @graphprotocol/indexer-service @@ -397,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### ソース +#### From source ```sh # From Repo root directory @@ -417,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Docker の使用 +#### Using docker -- レジストリからイメージを引き出す +- Pull images from the registry ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -**注**: コンテナの起動後、インデクサーサービスは[http://localhost:7600](http://localhost:7600)でアクセスでき、インデクサーエージェントは[http://localhost:18000/](http://localhost:18000/)で インデクサー管理 API を公開しているはずです。 +Or build images locally from source ```sh # Indexer service @@ -441,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- コンポーネントの実行 +- Run the components ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -[Google Cloud で Terraform を使ってサーバーインフラを構築するのセクション ](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) を参照してください。 +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). -#### K8s と Terraform の使用 +#### Using K8s and Terraform -Indexer CLI は、[`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)のプラグインで、ターミナルから`graph indexer`でアクセスできます。 +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section -#### 使用方法 +#### Usage -> **注**:全てのランタイム設定変数は、起動時にコマンドのパラメーターとして適用するか、`COMPONENT_NAME_VARIABLE_NAME`(例:`INDEXER_AGENT_ETHEREUM`)という形式の環境変数を使用することができます。 +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### インデクサーエージェント +#### Indexer agent ```sh graph-indexer-agent start \ @@ -486,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### インデクサーサービス +#### Indexer service ```sh SERVER_HOST=localhost \ @@ -512,44 +513,44 @@ graph-indexer-service start \ | pino-pretty ``` -#### インデクサー CLI +#### Indexer CLI -インデクサーがプロトコルに GRT をステークすると、[indexer components](/indexing#indexer-components)を起動し、ネットワークとのやりとりを始めることができます。 +The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. 
```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer CLI によるインデクサー管理 +#### Indexer management using indexer CLI -インデクサエージェントは、インデクサーに代わって自律的にネットワークと対話するために、インデクサーからの入力を必要とします。 インデクサー・エージェントの動作を定義するためのメカニズムが**インデキシングルール**です。 インデクサーは、**インデキシングルール**を使用して、インデックスを作成してクエリを提供するサブグラフを選択するための特定の戦略を適用することができます。 ルールは、エージェントが提供する GraphQL API を介して管理され、Indexer Management API と呼ばれています。 **Indexer Management API**を操作するための推奨ツールは、 **Graph CLI**の拡張である**Indexer CLI**です。 +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. -#### 使用方法 +#### Usage -**Indexer CLI**は、通常ポート・フォワーディングを介してインデクサー・エージェントに接続するため、CLI が同じサーバやクラスタ上で動作する必要はありません。 ここでは CLI について簡単に説明します。 +The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. -- `graph indexer connect ` - インデクサー管理 API に接続します。 通常、サーバーへの接続はポートフォワーディングによって開かれ、CLI をリモートで簡単に操作できるようになります。 (例:`kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - 1 つまたは複数のインデキシングルールを取得します。 ``に `all` を指定すると全てのルールを取得し、`global` を指定するとグローバルなデフォルトを取得します。 追加の引数`--merged` を使用すると、ディプロイメント固有のルールをグローバル ルールにマージするように指定できます。 これがインデクサー・エージェントでの適用方法です。 +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. -- `graph indexer rules set [options] ...` - 1 つまたは複数のインデキシング規則を設定します。 +- `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - 利用可能な場合はサブグラフ配置のインデックス作成を開始し、`decisionBasis`を`always`に設定するので、インデクサー・エージェントは常にインデキシングを選択します。 グローバル ルールが always に設定されている場合、ネットワーク上のすべての利用可能なサブグラフがインデックス化されます。 +- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - 配置のインデックス作成を停止し、`decisionBasis`を never に設定することで、インデックスを作成する配置を決定する際にこの配置をスキップします。 +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. 
-- `graph indexer rules maybe [options] ` - 配置の`thedecisionBasis` を`rules`に設定し、インデクサーエージェントがインデキシングルールを使用して、この配置にインデックスを作成するかどうかを決定するようにします。 +- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. -出力にルールを表示するすべてのコマンドは、`-output`引数を使用して、サポートされている出力形式(`table`, `yaml`, and `json`) の中から選択できます。 +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. -#### インデキシングルール +#### Indexing rules -インデキシングルールは、グローバルなデフォルトとして、または ID を使用して特定のサブグラフデプロイメントに適用できます。 `deployment`と`decisionBasis`フィールドは必須で、その他のフィールドはすべてオプションです。 インデキシングルールが`decisionBasis`として`rules` を持つ場合、インデクサー・エージェントは、そのルール上の非 NULL の閾値と、対応する配置のためにネットワークから取得した値を比較します。 サブグラフデプロイメントがいずれかのしきい値以上(または以下)の値を持つ場合、それはインデキシングのために選択されます。 +Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -例えば、グローバル ルールの`minStake`が**5**(GRT) の場合、5(GRT) 以上のステークが割り当てられているサブグラフデプロイメントは、インデックスが作成されます。 しきい値ルールには、 `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`があります。 +For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. -データモデル +Data model: ```graphql type IndexingRule { @@ -572,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### コストモデル +#### Cost models -コストモデルは、マーケットやクエリ属性に基づいて、クエリの動的な価格設定を行います。 インデクサーサービスは、クエリに応答する予定の各サブグラフのコストモデルをゲートウェイと共有します。 一方、ゲートウェイはコストモデルを使用して、クエリごとにインデクサーの選択を決定し、選択されたインデクサーと支払いの交渉を行います。 +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. #### Agora -Agora 言語は、クエリのコストモデルを宣言するための柔軟なフォーマットを提供します。 Agora のコストモデルは、GraphQL クエリのトップレベルのクエリごとに順番に実行される一連のステートメントです。 各トップレベルのクエリに対して、それにマッチする最初のステートメントがそのクエリの価格を決定します。 +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. -ステートメントは、GraphQL クエリのマッチングに使用される述語と、評価されると decimal GRT でコストを出力するコスト式で構成されます。 クエリの名前付き引数の位置にある値は、述語の中に取り込まれ、式の中で使用されます。 また、グローバルを設定し、式のプレースホルダーとして代用することもできます。 +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. 
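To show how such a statement prices a query once its predicate matches, here is a small TypeScript sketch that reproduces, by hand, the arithmetic of the two statements in the example model below. It is not the Agora evaluator: query shape matching is skipped, only the captured `$skip` value is modelled, and `$SYSTEM_LOAD` is assumed to be 1 so the results agree with the pricing table that follows.

```typescript
// $SYSTEM_LOAD is assumed to be 1 here so the numbers line up with the
// pricing table shown after the model.
const SYSTEM_LOAD = 1;

// Simplified first-match pricing: shape matching is reduced to "does the
// top-level query carry a skip argument or not".
function priceTopLevelQuery(skip?: number): number {
  if (skip !== undefined && skip > 2000) {
    // matches: query { pairs(skip: $skip) { id } } when $skip > 2000
    return 0.0001 * skip * SYSTEM_LOAD;
  }
  // falls through to the default statement
  return 0.1 * SYSTEM_LOAD;
}

console.log(priceTopLevelQuery(5000)); // 0.5 GRT for { pairs(skip: 5000) { id } }
console.log(priceTopLevelQuery());     // 0.1 GRT for { tokens { symbol } }
```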
-上記モデルを用いたクエリのコスト計算例: +Example cost model: ``` # This statement captures the skip value, @@ -595,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -コストモデルの例: +Example query costing using the above model: | Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### コストモデルの適用 +#### Applying the cost model -コストモデルは Indexer CLI を通じて適用され、それをインデクサー・エージェントの Indexer Management API に渡してデータベースに格納します。 その後、インデクサーサービスがこれを受け取り、ゲートウェイから要求があるたびにコスト・モデルを提供します。 +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## ネットワークとのインタラクション +## Interacting with the network -### プロトコルへのステーク +### Stake in the protocol -インデクサーとしてネットワークに参加するための最初のステップは、プロトコルを承認し、資金を拠出し、(オプションで)日常的なプロトコルのやり取りのためにオペレーターアドレスを設定することです。 \_ **注**: 本説明書ではコントラクトのやり取りに Remix を使用しますが、お好みのツールを自由にお使いください([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account)などが知られています) +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ -健全なアロケーションは、インデクサーによって作成された後、4 つの状態を経ます。 +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. -#### トークンの承認 +#### Approve tokens -1. ブラウザで[Remix app](https://remix.ethereum.org/)を開きます。 +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. `File Explorer`で**GraphToken.abi**というファイルを作成し、 [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json)を指定します。 +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. `GraphToken.abi`を選択してエディタで開いた状態で、Remix のインターフェースの Deploy and `Run Transactions` セクションに切り替えます。 +3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. -4. 環境から[`Injected Web3`] を選択し、`Account`でインデクサーアドレスを選択します。 +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. GraphToken のコントラクトアドレスの設定 - `At Address`の横に GraphToken のコントラクトアドレス(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) を貼り付け、`At Address`ボタンをクリックして適用します。 +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. -6. 
`approve(spender, amount)`関数を呼び出し、ステーキング契約を承認します。 `spender`にはステーキングコントラクトアドレス(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`)を、`amount`にはステークするトークン(単位:wei)を記入します。 +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). -#### トークンをステークする +#### Stake tokens -1. ブラウザで[Remix app](https://remix.ethereum.org/) を開きます。 +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. `File Explorer`で**Staking.abi**という名前のファイルを作成し、Staking ABI を指定します。 +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. -3. エディタで`Staking.abi`を選択して開いた状態で、Remix インターフェースの`Deploy` and `Run Transactions`セクションに切り替えます。 +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. -4. 環境から[`Injected Web3`] を選択し、`Account`でインデクサーアドレスを選択します。 +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. Staking contract address の設定 - `At Address`の横に Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) を貼り付け、 `At Address`ボタンをクリックして適用します。 +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. -6. `stake()`を呼び出して、GRT をプロトコルにステークします。 +6. Call `stake()` to stake GRT in the protocol. -7. (オプション)インデクサーは、資金を管理する鍵と、サブグラフへの割り当てや(有料の)クエリの提供などの日常的な動作を行う鍵とを分離するために、別のアドレスをインデクサインフラストラクチャのオペレーターとして承認することができます。 オペレーターを設定するには、オペレーターのアドレスを指定して`setOperator()`をコールします。 +7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (オプション) 報酬の分配を制御し、デリゲータを戦略的に引き付けるために、 インデクサーは indexingRewardCut (parts per million)、 queryFeeCut (parts per million)、 cooldownBlocks (number of blocks) を更新することで、 デリゲーションパラメータを更新することができます。 これを行うには`setDelegationParameters()`をコールします。 次の例では、クエリフィーカットをクエリリベートの 95%をインデクサーに、5%をデリゲーターに分配するように設定し、インデクサーリワードカットをインデキシング報酬の 60%をインデクサーに、40%をデリゲーターに分配するよう設定し、`thecooldownBlocks` 期間を 500 ブロックに設定しています。 +8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### アロケーションの寿命 +### The life of an allocation -インデクサーによって作成された後、健全なアロケーションは4つの状態を経ます。 +After being created by an indexer a healthy allocation goes through four states. 
-- **Active**- オンチェーンでアロケーションが作成されると(allocateFrom())、それは**active**であるとみなされます。 インデクサー自身やデリゲートされたステークの一部がサブグラフの配置に割り当てられ、これによりインデクシング報酬を請求したり、そのサブグラフの配置のためにクエリを提供したりすることができます。 インデクサエージェントは、インデキシングルールに基づいて割り当ての作成を管理します。
+- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules.

-- **Closed** - インデクサーは、1 エポックが経過した時点で自由に割り当てをクローズすることができます([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) また、インデクサエージェントは、**maxAllocationEpochs**(現在は 28 日)が経過した時点で自動的に割り当てをクローズします。 割り当てが有効な POI(Proof of Indexing)とともにクローズされると、そのインデクサー報酬がインデクサーとそのデリゲーターに分配されます(詳細は下記の「報酬の分配方法」を参照してください)
+- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more).

-- **Finalized** - 割り当てがクローズすると、争議期間が設けられます。 その後、割り当てが**finalized**したとみなされ、クエリフィーのリベートを請求することができます(claim()) インデクサーエージェントは、ネットワークを監視して**finalized** した割り当てを検出し、設定可能な(オプションの)しきい値 **—-allocation-claim-threshold**を超えていれば、それを請求できます。
+- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and its query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **--allocation-claim-threshold**.

-- **請求** - アロケーションの最終状態で、アクティブなアロケーションとしての期間が終了し、全ての適格な報酬が配布され、クエリ料の払い戻しが請求されます。
+- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed.

From cf5d9e0290d1b947d07f9f207eb2365e37c52127 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:56:42 -0500
Subject: [PATCH 102/241] New translations indexing.mdx (Spanish)

---
 pages/es/indexing.mdx | 390 +++++++++++++++++++++---------------------
 1 file changed, 195 insertions(+), 195 deletions(-)

diff --git a/pages/es/indexing.mdx b/pages/es/indexing.mdx
index 460c64005f05..398c746cbd93 100644
--- a/pages/es/indexing.mdx
+++ b/pages/es/indexing.mdx
@@ -4,47 +4,47 @@ title: indexación

import { Difficulty } from '@/components'

-Los Indexadores son operadores de nodos en The Graph Network que stakean Graph Tokens (GRT) para proporcionar servicios de indexación y procesamiento de consultas. Los Indexadores obtienen tarifas de consulta y recompensas de indexación por sus servicios. También obtienen ganacias de un pool de reembolso que se comparte con todos los contribuyentes de la red en proporción a su trabajo, siguiendo la idea de Function Rebate por parte de Cobbs-Douglas.
+Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. -Los GRT que se bloquean (en stake) dentro del protocolo están sujetos a un período de descongelación y pueden ser reducidos si los Indexadores son maliciosos y entregan datos incorrectos a las aplicaciones o si indexan información incorrecta. A los Indexadores también se les puede asignar participaciones por parte de los Delegadores, quienes buscan contribuir a la red. +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. -Los Indexadores seleccionan subgrafos para indexar basados en la señal de curación del subgrafo, donde los curadores acuñan sus GRT para indicar qué subgrafos son de mejor calidad y deben tener prioridad para ser indexados. Los consumidores (por ejemplo, aplicaciones, clientes) también pueden establecer parámetros para los cuales los Indexadores procesan consultas para sus subgrafos y establecen preferencias para el precio asignado a cada consulta. +Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. -## Preguntas frecuentes +## FAQ -### ¿Cuál es la participación mínima requerida (stake) para ser Indexador en la red? +### What is the minimum stake required to be an indexer on the network? -El stake mínimo para un indexador es actualmente de 100.000 GRT. +The minimum stake for an indexer is currently set to 100K GRT. -### ¿Cuáles son las fuentes de ingresos de un indexador? +### What are the revenue streams for an indexer? -** Descuentos en las tarifas de consulta**: Pagos por atender consultas en la red. Estos pagos están asignados a través de unos canales entre el Indexador y un gateway. Cada solicitud de consulta de una puerta de enlace contiene un pago y la respuesta correspondiente una prueba de la validez del resultado de la consulta. +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Recompensas de indexación**: Generadas a través de una inflación anual del protocolo equivalente al 3% , las recompensas de indexación se distribuyen a los indexadores que indexan las implementaciones de subgrafos para la red. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. -### ¿Cómo se distribuyen las recompensas? +### How are rewards distributed? -Las recompensas de indexación provienen de la inflación del protocolo, que se establece en una emisión anual del 3%. 
Se distribuyen en subgrafos según la proporción de toda la señal de curación en cada uno, luego se distribuyen proporcionalmente a los indexadores en función de su stake asignado en ese subgrafo. ** Una asignación debe cerrarse con una prueba de indexación (POI) válida que cumpla con los estándares establecidos por la carta de arbitraje para ser elegible dentro de las recompensas.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -La comunidad ha creado numerosas herramientas para calcular las recompensas; encontrarás una colección de ellos organizados en la [colección de herramientas creeadas por la Comunidad](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). También puedes encontrar una lista actualizada de herramientas en los canales de #delegators e #indexers en el [ servidor de Discord](https://discord.gg/vtvv7FP). +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). -### ¿Qué es una prueba de indexación (POI)? +### What is a proof of indexing (POI)? -POI se utilizan en la red para verificar que un indexador está indexando los subgrafos en los que ha asignado. Se debe enviar un POI para el primer bloque del ciclo actual al cerrar una asignación para que esa asignación sea elegible para las recompensas de indexación. Un POI para un bloque, es un resumen de todas las transacciones de las entidades involucradas en la implementación de un subgrafo específico e incluyendo ese bloque. +POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. -### ¿Cuándo se distribuyen las recompensas de indexación? +### When are indexing rewards distributed? -Las asignaciones acumulan recompensas continuamente mientras están activas. Los indexadores recogen las recompensas y las distribuyen cada vez que se cierran sus asignaciones. Eso sucede ya sea manualmente, siempre que el indexador quiera forzar el cierre, o después de 28 ciclos un delegador puede cerrar la asignación para el indexador, pero esto da como resultado que no se generen recompensas. 28 ciclos es la duración máxima de la asignación (en este momento, un ciclo dura aproximadamente 24 h). +Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 
28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). -### ¿Se pueden monitorear las recompensas pendientes del indexador? +### Can pending indexer rewards be monitored? -Muchos de los paneles creados por la comunidad incluyen valores de recompensas pendientes y se pueden verificar fácilmente de forma manual siguiendo estos pasos: +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. -Usa Etherscan para llamar `getRewards()`: +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Consulta el [ subgrafo de la red principal](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) para obtener los ID de todas las asignaciones activas: +1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -60,135 +60,135 @@ query indexerAllocations { } ``` -Utiliza Etherscan para solicitar el `getRewards()`: +Use Etherscan to call `getRewards()`: -- Navega a través de [la interfaz de Etherscan para ver el contrato de recompensas](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* Para llamar `getRewards()`: - - Eleva el **10. getRewards** dropdown. - - Introduce el **allocationID** en la entrada. - - Presiona el botón de **Query**. +* To call `getRewards()`: + - Expand the **10. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. -### ¿Qué son las disputas y dónde puedo verlas? +### What are disputes and where can I view them? -Las consultas y asignaciones del Indexador se pueden disputar en The Graph durante el período de disputa. El período de disputa varía según el tipo de disputa. Las consultas tienen una ventana de disputa de 7 ciclos, mientras que las asignaciones tienen 56 ciclos. Una vez transcurridos estos períodos, no se pueden abrir disputas contra asignaciones o consultas. Cuando se abre una disputa, los Fishermen requieren un depósito mínimo de 10,000 GRT, que permanecerá bloqueado hasta que finalice la disputa y se haya dado una resolución. Los Fishermen (o pescadores) son todos los participantes de la red que abren disputas. +Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. -Las disputas se pueden ver en la interfaz de usuario, en la página de perfil de un Indexador, en la pestaña `Disputas`. +Disputes have **three** possible outcomes, so does the deposit of the Fishermen. 
-- Si se rechaza la disputa, los GRT depositados por los Fishermen se quemarán y el Indexador en disputa no será recortado. -- Si la disputa se resuelve como empate, se devolverá el depósito de los Fishermen y no se recortará al indexador en disputa. -- Si la disputa es aceptada, los GRT depositados por los Fishermen será devuelto, el Indexador en disputa será recortado y los Fishermen ganarán el 50% de los GRT recortados. +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. +- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. -Las disputas se podran visualizar en la interfaz correspondiente al perfil del indexador en la pestaña de `disputas`. +Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. -### ¿Qué son los reembolsos de tarifas de consulta y cuándo se distribuyen? +### What are query fee rebates and when are they distributed? -La puerta de enlace (gateway) recoge las tarifas de consulta cada vez que se cierra una asignación y se acumulan en el pool de reembolsos de tarifas de consulta del subgrafo. El pool de reembolsos está diseñado para alentar a los Indexadores a asignar participación en una proporción aproximada del monto de tarifas de consulta que ganan para la red. La parte de las tarifas de consulta en el pool que se asigna a un indexador en particular se calcula mediante la Función de Producción Cobbs-Douglas; el monto distribuido por indexador es una función de sus contribuciones al pool y su asignación de participación (stake) en el subgrafo. +Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. -Una vez que se ha cerrado una asignación y ha pasado el período de disputa, los reembolsos están disponibles para ser reclamados por el indexador. Al reclamar, los reembolsos de la tarifa de consulta se distribuyen al indexador y sus delegadores en función del recorte de la tarifa de consulta y las proporciones del pool de delegación. +Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. -### ¿Qué es el recorte de la tarifa de consulta y el recorte de la recompensa de indexación? +### What is query fee cut and indexing reward cut? -Los valores `queryFeeCut` y `indexingRewardCut` son parámetros de delegación que el Indexador puede establecer junto con cooldownBlocks para controlar la distribución de GRT entre el indexador y sus delegadores. Consulta los últimos pasos en [Staking en el protocolo](/indexing#stake-in-the-protocol) para obtener instrucciones sobre cómo configurar los parámetros de delegación. 
+The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. -- **queryFeeCut**: el porcentaje de los reembolsos de tarifas de consulta acumulados en un subgrafo que se distribuirá al indexador. Si se establece en 95%, el indexador recibirá el 95% del pool de reembolsos de la tarifa de consulta cuando se reclame una asignación y el otro 5% irá a los delegadores. +- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. -- **indexingRewardCut**: el porcentaje de las recompensas de indexación acumuladas en un subgrafo que se distribuirá al indexador. Si se establece en 95%, el indexador recibirá el 95% del pool de recompensas de indexación cuando se cierre una asignación y los delegadores dividirán el otro 5%. +- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. -### ¿Cómo saben los indexadores qué subgrafos indexar? +### How do indexers know which subgraphs to index? -Los indexadores pueden diferenciarse aplicando técnicas avanzadas para tomar decisiones de indexación de subgrafos, pero para dar una idea general, discutiremos varias métricas clave que se utilizan para evaluar subgrafos en la red: +Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: -- **Señal de curación **: la proporción de señal de curación de la red aplicada a un subgrafo en particular es un buen indicador del interés en ese subgrafo, especialmente durante la fase de lanzamiento cuando el volumen de consultas aumenta. +- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. -- ** Tarifas de consulta recogidas**: Los datos históricos del volumen de tarifas de consulta recogidas para un subgrafo específico son un buen indicador de la demanda futura. +- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. -- ** Cantidad en staking**: Monitorear el comportamiento de otros indexadores u observar las proporciones de la participación total asignada a subgrafos específicos puede permitirle al indexador monitorear el lado de la oferta en busca de consultas de subgrafos para identificar subgrafos que los que la red muestra confianza o subgrafos que pueden mostrar una necesidad de mayor suministro. +- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
-- ** Subgrafos sin recompensas de indexación**: Algunos subgrafos no generan recompensas de indexación principalmente porque utilizan funciones no compatibles como IPFS o porque están consultando otra red fuera de la red principal. Verás un mensaje en un subgrafo si no genera recompensas de indexación. +- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. -### ¿Cuáles son los requisitos de hardware? +### What are the hardware requirements? -- **Pequeño**: Lo suficiente como para comenzar a indexar varios subgrafos, es probable que deba expandirse. -- **Estándar**: Configuración predeterminada, esto es lo que se usa en los manifiestos de implementación de k8s/terraform de ejemplo. -- **Medio**: Indexador de producción que admite 100 subgrafos y 200-500 solicitudes por segundo. -- **Grande**: Preparado para indexar todos los subgrafos utilizados actualmente y atender solicitudes para el tráfico relacionado. +- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. +- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. -| Configuración | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| ------------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Pequeño | 4 | 8 | 1 | 4 | 16 | -| Estándar | 8 | 30 | 1 | 12 | 48 | -| Medio | 16 | 64 | 2 | 32 | 64 | -| Grande | 72 | 468 | 3,5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | -### ¿Cuáles son algunas de las precauciones de seguridad básicas que debe tomar un indexador? +### What are some basic security precautions an indexer should take? -- ** Billetera del operador**: Configurar una billetera del operador es una precaución importante porque permite que un indexador mantenga la separación entre sus claves que controlan la participación (stake) y las que tienen el control de las operaciones diarias. Consulta [Participación en el Protocolo](/indexing#stake-in-the-protocol) para obtener instrucciones. +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. -- **Firewall**: Solo el servicio indexador debe exponerse públicamente y se debe prestar especial atención al bloqueo de los puertos de administración y el acceso a la base de datos: el punto final JSON-RPC de Graph Node (puerto predeterminado: 8030), el punto final de la API de administración del indexador (puerto predeterminado: 18000) y el punto final de la base de datos de Postgres (puerto predeterminado: 5432) no deben estar expuestos. +- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. -## Infraestructura +## Infrastructure -En el centro de la infraestructura de un indexador está el Graph Node que monitorea Ethereum, extrae y carga datos según una definición de subgrafo y lo sirve como una [GraphQL API](/about/introduction#how-the-graph-works). El Graph Node debe estar conectado a los puntos finales del nodo Ethereum EVM y al nodo IPFS para obtener datos; una base de datos PostgreSQL para su tienda; y componentes del indexador que facilitan sus interacciones con la red. +At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. -- **Base de datos PostgreSQL**: El almacén principal para Graph Node, aquí es donde se almacenan los datos del subgrafo. El servicio y el agente del indexador también utilizan la base de datos para almacenar datos del canal de estado, modelos de costos y reglas de indexación. +- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. -- **Endpoint de Ethereum**: Un punto final que expone una API Ethereum JSON-RPC. 
Esto puede tomar la forma de un solo cliente Ethereum o podría ser una configuración más compleja que equilibre la carga en varios. Es importante tener en cuenta que ciertos subgrafos requerirán capacidades particulares del cliente Ethereum, como el modo de archivo y la API de seguimiento.
+- **Ethereum endpoint** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API.

-- ***Nodo IPFS (versión inferior a 5)**: Los metadatos de implementación de Subgrafo se almacenan en la red IPFS. El Graph Node accede principalmente al nodo IPFS durante la implementación del subgrafo para obtener el manifiesto del subgrafo y todos los archivos vinculados. Los indexadores de la red no necesitan alojar su propio nodo IPFS, un nodo IPFS para la red está alojado en https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

-- **Servicio de indexador**: Gestiona todas las comunicaciones externas necesarias con la red. Comparte modelos de costos y estados de indexación, transfiere solicitudes de consulta desde la puerta de acceso (gateway) a Graph Node y administra los pagos de consultas a través de canales de estado con la puerta de acceso.
+- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.

-- **Agente indexador**: Facilita las interacciones de los indexadores en cadena, incluido el registro en la red, la gestión de implementaciones de subgrafos en sus Graph Node y la gestión de asignaciones. Servidor de métricas de Prometheus: los componentes Graph Node y el Indexer registran sus métricas en el servidor de métricas.
+- **Indexer agent** - Facilitates the indexer's interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server.

-Nota: Para admitir el escalado ágil, se recomienda que las inquietudes de consulta e indexación se separen entre diferentes conjuntos de nodos: nodos de consulta y nodos de índice.
+Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.

-### Resumen de puertos
+### Ports overview

-> **Importante**: Ten cuidado con la exposición de los puertos públicamente; los **puertos de administración** deben mantenerse bloqueados. Esto incluye el Graph Node JSON-RPC y los extremos de administración del indexador que se detallan a continuación.
+> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC and the indexer management endpoints detailed below.
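As a concrete illustration of the note above, one possible way to restrict exposure on a single host is shown below. This is a minimal sketch, assuming an Ubuntu machine with `ufw` and the default ports documented on this page (7600 for the indexer service; 8020/8030, 18000 and 5432 kept private); the SSH rule and the tool choice are assumptions, so adapt it to your own firewall and network layout rather than treating it as the official setup.

```sh
# Deny all inbound traffic by default, then open only what must be public.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# SSH for administration (assumed; adjust to your own access setup)
sudo ufw allow 22/tcp

# Indexer service: the only component that needs to be publicly reachable
sudo ufw allow 7600/tcp

# Graph Node admin/status (8020/8030), indexer management (18000) and
# Postgres (5432) stay closed because of the default deny rule.
sudo ufw enable
```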
#### Graph Node -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| ------ | ---------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------- | -| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | -| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | -| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | -#### Servicio de Indexador +#### Indexer Service -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| ------ | ----------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | -#### Agente Indexador +#### Indexer Agent -| Puerto | Objeto | Rutas | Argumento CLI | Variable de
Entorno | -| ------ | ----------------------------- | ----- | ------------------------- | --------------------------------------- | -| 8000 | API de gestión de indexadores | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | +| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Configurar la infraestructura del servidor con Terraform en Google Cloud +### Setup server infrastructure using Terraform on Google Cloud -#### Instalar requisitos previos +#### Install prerequisites -- SDK de Google Cloud -- Herramienta de línea de comandos de Kubectl +- Google Cloud SDK +- Kubectl command line tool - Terraform -#### Crear un proyecto de Google Cloud +#### Create a Google Cloud Project -- Clona o navega hasta el repositorio del indexador. +- Clone or navigate to the indexer repository. -- Navega al directorio ./terraform, aquí es donde se deben ejecutar todos los comandos. +- Navigate to the ./terraform directory, this is where all commands should be executed. ```sh cd terraform ``` -- Autentícate con Google Cloud y crea un nuevo proyecto. +- Authenticate with Google Cloud and create a new project. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Usa la \[página de facturación\](página de facturación) de Google Cloud Console para habilitar la facturación del nuevo proyecto. +- Use the Google Cloud Console's billing page to enable billing for the new project. -- Crea una configuración de Google Cloud. +- Create a Google Cloud configuration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Habilita las API requeridas de Google Cloud. +- Enable required Google Cloud APIs. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Crea una cuenta de servicio. +- Create a service account. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Habilita el emparejamiento entre la base de datos y el clúster de Kubernetes que se creará en el siguiente paso. +- Enable peering between database and Kubernetes cluster that will be created in the next step. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Crea un archivo de configuración mínimo de terraform (actualiza según sea necesario). +- Create minimal terraform configuration file (update as needed). ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### Usa Terraform para crear infraestructura +#### Use Terraform to create infrastructure -Antes de ejecutar cualquier comando, lee [ variables.tf ](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) y crea un archivo `terraform.tfvars` en este directorio (o modifica el que creamos en el último paso). Para cada variable en la que deseas anular el valor predeterminado, o donde necesites establecer un valor, ingresa una configuración en `terraform.tfvars`. 
+Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. -- Ejecuta los siguientes comandos para crear la infraestructura. +- Run the following commands to create the infrastructure. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -Implementa todos los recursos con `kubectl apply -k $dir`. +Download credentials for the new cluster into `~/.kube/config` and set it as your default context. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Crea los componentes de Kubernetes para el indexador +#### Creating the Kubernetes components for the indexer -- Copia el directorio `k8s/overlays` a un nuevo directorio `$dir,` y ajusta la entrada `bases` en `$dir/kustomization.yaml` para que apunte al directorio `k8s/base`. +- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. -- Lee todos los archivos en `$dir` y ajusta cualquier valor como se indica en los comentarios. +- Read through all the files in `$dir` and adjust any values as indicated in the comments. -Despliega todas las fuentes usando `kubectl apply -k $dir`. +Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) es una implementación de Rust de código abierto que genera eventos en la blockchain Ethereum para actualizar de manera determinista un almacén de datos que se puede consultar a través del Punto final GraphQL. Los desarrolladores usan subgrafos para definir su esquema, y ​​un conjunto de mapeos para transformar los datos provenientes de la blockchain y Graph Node maneja la sincronización de toda la cadena, monitorea nuevos bloques y sirve a través de un punto final GraphQL. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. -#### Empezar desde el origen +#### Getting started from source -#### Instalar Prerrequisitos +#### Install prerequisites - **Rust** @@ -307,15 +307,15 @@ Despliega todas las fuentes usando `kubectl apply -k $dir`. - **IPFS** -- **Requisitos adicionales para usuarios de Ubuntu**: Para ejecutar un nodo Graph en Ubuntu, es posible que se necesiten algunos paquetes adicionales. +- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Configurar +#### Setup -1. Inicia un servidor de base de datos PostgreSQL +1. Start a PostgreSQL database server ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. 
Clona el repositorio [Graph Node](https://github.com/graphprotocol/graph-node) y crea la fuente ejecutando `cargo build` +2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` -3. Ahora que todas las dependencias están configuradas, inicia el nodo Graph (Graph Node): +3. Now that all the dependencies are setup, start the Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Empezar usando Docker +#### Getting started using Docker -#### Prerrequisitos +#### Prerequisites -- ** nodo Ethereum**: De forma predeterminada, la configuración de composición de Docker utilizará la red principal: [http://host.docker.internal:8545](http://host.docker.internal:8545) para conectarse al nodo Ethereum en su máquina alojada. Puedes reemplazar este nombre de red y url actualizando `docker-compose.yaml`. +- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. -#### Configurar +#### Setup -1. Clona Graph Node y navega hasta el directorio de Docker: +1. Clone Graph Node and navigate to the Docker directory: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. Solo para usuarios de Linux: usa la dirección IP del host en lugar de `host.docker.internal` en `docker-compose.yaml`usando el texto incluido: +2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: ```sh ./setup.sh ``` -3. Inicia un Graph Node local que se conectará a su punto final de Ethereum: +3. Start a local Graph Node that will connect to your Ethereum endpoint: ```sh docker-compose up ``` -### Componentes de Indexador +### Indexer components -Para participar con éxito en la red se requiere una supervisión e interacción casi constantes, por lo que hemos creado un conjunto de aplicaciones de Typecript para facilitar la participación de una red de indexadores. Hay tres componentes de indexador: +To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: -- ** Agente indexador**: el agente monitorea la red y la propia infraestructura del indexador y administra qué implementaciones de subgrafos se indexan y asignan en la cadena y cuánto se asigna a cada uno. +- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. -- **Servicio de indexación**: El único componente que debe exponerse externamente, el servicio transfiere las consultas de subgrafo al graph node, administra los canales de estado para los pagos de consultas, comparte información importante para la toma de decisiones a clientes como las puertas de acceso (gateway). +- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. 
-- **CLI de Indexador**: La interfaz de línea de comandos para administrar el agente indexador. Permite a los indexadores administrar modelos de costos y reglas de indexación. +- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. -#### Comenzar +#### Getting started -El agente indexador y el servicio indexador deben ubicarse junto con su infraestructura Graph Node. Hay muchas formas de configurar entornos de ejecución virtual para tus componentes de indexador; aquí explicaremos cómo ejecutarlos en baremetal utilizando paquetes o fuente NPM, o mediante kubernetes y docker en Google Cloud Kubernetes Engine. Si estos ejemplos de configuración no se traducen bien en tu infraestructura, es probable que haya una guía de la comunidad de referencia, ¡ven a saludar en [Discord](https://thegraph.com/discord)! Recuerda [stake en el protocolo](/indexing#stake-in-the-protocol) antes de iniciar tus componentes de indexador! ¡Recuerda hacer [staking en el protocolo](/indexing#stake-in-the-protocol) antes de establecer tus componentes como indexer! +The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! -#### Paquetes de NPM +#### From NPM packages ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### Fuente +#### From source ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Uso de Docker +#### Using docker -- Extrae imágenes del registro +- Pull images from the registry ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -**NOTA**: Después de iniciar los contenedores, se debe poder acceder al servicio de indexación en [http://localhost:7600](http://localhost:7600) y el agente indexador debería exponer la API de administración del indexador en [ http://localhost:18000/](http://localhost:18000/). +Or build images locally from source ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Ejecuta los componentes +- Run the components ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -Consulta la sección [Configuración de la infraestructura del servidor con Terraform en Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) +**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). -#### Uso de K8s y Terraform +#### Using K8s and Terraform -Indexer CLI es un complemento para [ `@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accesible en la terminal en `graph indexer`. 
+See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section -#### Uso +#### Usage -> **NOTA**: Todas las variables de configuración de tiempo de ejecución se pueden aplicar como parámetros al comando en el inicio o usando variables de entorno con el formato `COMPONENT_NAME_VARIABLE_NAME`(ej. `INDEXER_AGENT_ETHEREUM`). +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### Agente Indexador +#### Indexer agent ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Servicio de Indexador +#### Indexer service ```sh SERVER_HOST=localhost \ @@ -515,42 +515,42 @@ graph-indexer-service start \ #### Indexer CLI -Indexer CLI es un complemento para [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accesible en la terminal de `graph indexer`. +The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Gestión del indexador mediante Indexer CLI +#### Indexer management using indexer CLI -El agente indexador necesita información de un indexador para interactuar de forma autónoma con la red en nombre del indexador. El mecanismo para definir el comportamiento del agente indexador son las **reglas de indexación**. Con las **reglas de indexación**, un indexador puede aplicar su estrategia específica para seleccionar subgrafos para indexar y atender consultas. Las reglas se administran a través de una API GraphQL proporcionada por el agente y conocida como API de administración de indexadores (Indexer Management API). La herramienta sugerida para interactuar con la **API de Administración del Indexador** es la **Indexer CLI**, una extensión de **Graph CLI**. +The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. -#### Uso +#### Usage -La **CLI del Indexador** se conecta al agente del indexador, normalmente a través del reenvío de puertos, por lo que no es necesario que CLI se ejecute en el mismo servidor o clúster. Para ayudarte a comenzar y proporcionar algo de contexto, la CLI se describirá brevemente aquí. +The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. -- `graph indexer connect ` - Conéctate a la API de administración del indexador. Normalmente, la conexión al servidor se abre mediante el reenvío de puertos, por lo que la CLI se puede operar fácilmente de forma remota. (Ejemplo: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Connect to the indexer management API. 
Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Obtén una o más reglas de indexación usando `all` `` para obtener todas las reglas, o `global` para obtener los valores globales predeterminados. Se puede usar un argumento adicional `--merged` para especificar que las reglas específicas de implementación se fusionan con la regla global. Así es como se aplican en el agente indexador. +- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. -- `graph indexer rules set [options] ...` - Establece una o más reglas de indexación. +- `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Empieza a indexar una implementación de subgrafo si está disponible y establece su `decisionBasis` en `always`, por lo que el agente indexador siempre elegirá indexarlo. Si la regla global se establece en siempre, se indexarán todos los subgrafos disponibles en la red. +- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - Dejq de indexar una implementación y establece tu `decisionBasis` en never (nunca), por lo que omitirá esta implementación cuando decida qué implementaciones indexar. +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. -- `graph indexer rules maybe [options] ` - Configura `thedecisionBasis` para una implementación en `rules`, de modo que el agente indexador use las reglas de indexación para decidir si indexar esta implementación. +- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. -Todos los comandos que muestran reglas en la salida pueden elegir entre los formatos de salida admitidos (`table`, `yaml` y `json`) utilizando `-output` argument. +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. -#### Reglas de Indexación +#### Indexing rules -Las reglas de indexación se pueden aplicar como valores predeterminados globales o para implementaciones de subgrafos específicos usando sus ID. Los campos `deployment` y `decisionBasis` son obligatorios, mientras que todos los demás campos son opcionales. Cuando una regla de indexación tiene `rules` como `decisionBasis`, el agente indexador comparará los valores de umbral no nulos en esa regla con los valores obtenidos de la red para la implementación correspondiente. Si la implementación del subgrafo tiene valores por encima (o por debajo) de cualquiera de los umbrales, se elegirá para la indexación. +Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. 
The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -Por ejemplo, si la regla global tiene un `minStake` de **5** (GRT), cualquier implementación de subgrafo que tenga más de 5 (GRT) de participación (stake) asignado a él será indexado. Las reglas de umbral incluyen `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake` y `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. -Modelo de Datos: +Data model: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### Modelos de Costos +#### Cost models -Los modelos de costos proporcionan precios dinámicos para consultas basadas en el mercado y los atributos de la consulta. El Servicio de Indexación comparte un modelo de costos con las puertas de enlace para cada subgrafo para el que pretenden responder a las consultas. Las puertas de enlace, a su vez, utilizan el modelo de costos para tomar decisiones de selección de indexadores por consulta y para negociar el pago con los indexadores elegidos. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. #### Agora -El lenguaje Agora proporciona un formato flexible para declarar modelos de costos para consultas. Un modelo de precios de Agora es una secuencia de declaraciones que se ejecutan en orden para cada consulta de nivel superior en una consulta GraphQL. Para cada consulta de nivel superior, la primera declaración que coincide con ella determina el precio de esa consulta. +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. -Una declaración se compone de un predicado, que se utiliza para hacer coincidir consultas GraphQL, y una expresión de costo que, cuando se evalúa, genera un costo en GRT decimal. Los valores en la posición del argumento nombrado de una consulta pueden capturarse en el predicado y usarse en la expresión. Los globales también se pueden establecer y sustituir por marcadores de posición en una expresión. +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. 
-Ejemplo de costos de consultas utilizando el modelo anterior: +Example cost model: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Ejemplo de modelo de costo: +Example query costing using the above model: -| Consulta | Precio | +| Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Aplicando el modelo de costos +#### Applying the cost model -Los modelos de costos se aplican a través de la CLI de Indexer, que los pasa a la API de Administración de Indexador del agente indexador para almacenarlos en la base de datos. Luego, el Servicio del Indexador los recogerá y entregará los modelos de costos a las puertas de enlace siempre que los soliciten. +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interactuar con la red +## Interacting with the network -### Participar en el protocolo +### Stake in the protocol -Los primeros pasos para participar en la red como Indexador son aprobar el protocolo, stakear fondos y (opcionalmente) configurar una dirección de operador para las interacciones diarias del protocolo. _ **Nota**: A los efectos de estas instrucciones, Remix se utilizará para la interacción del contrato, pero no dudes en utilizar la herramienta que elijas (\[OneClickDapp\](https: // oneclickdapp.com/), [ABItopic](https://abitopic.io/) y [MyCrypto](https://www.mycrypto.com/account) son algunas otras herramientas conocidas)._ +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ -Después de ser creada por un indexador, una asignación saludable pasa por cuatro estados. +Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. -#### Aprobar tokens +#### Approve tokens -1. Abre la [aplicación Remix](https://remix.ethereum.org/) en un navegador +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. En el `File Explorer`, crea un archivo llamado **GraphToken.abi** con [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. Con `GraphToken.abi` seleccionado y abierto en el editor, cambia a la sección Implementar (Deploy) y `Run Transactions` en la interfaz Remix. +3. 
With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. -4. En entorno, selecciona `Injected Web3` y en `Account` selecciona tu dirección de indexador. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. Establece la dirección del contrato GraphToken: pega la dirección del contrato GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) junto a `At Address` y haz clic en el botón `At address` para aplicar. +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. -6. Llame a la función `approve(spender, amount)` para aprobar el contrato de Staking. Completa `spender` con la dirección del contrato de Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) y `amount` con los tokens en stake (en wei). +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). -#### Staking de tokens +#### Stake tokens -1. Abre la [aplicación Remix](https://remix.ethereum.org/) en un navegador +1. Open the [Remix app](https://remix.ethereum.org/) in a browser -2. En el `File Explorer`, crea un archivo llamado ** Staking.abi** con la ABI de staking. +2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. -3. Con `Staking.abi` seleccionado y abierto en el editor, cambia a la sección `Deploy` y `Run Transactions` en la interfaz Remix. +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. -4. En entorno, selecciona `Injected Web3` y en `Account` selecciona tu dirección de indexador. +4. Under environment select `Injected Web3` and under `Account` select your indexer address. -5. Establece la dirección del contrato de staking - Pega la dirección del contrato de Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) junto a `At Address` y haz clic en el botón `At address` para aplicar. +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. -6. Llama a `stake()` para bloquear GRT en el protocolo. +6. Call `stake()` to stake GRT in the protocol. -7. (Opcional) Los indexadores pueden aprobar otra dirección para que sea el operador de su infraestructura de indexación a fin de separar las claves que controlan los fondos de las que realizan acciones cotidianas, como la asignación en subgrafos y el servicio de consultas (pagadas). Para configurar el operador, llama a `setOperator()` con la dirección del operador. +7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Opcional) Para controlar la distribución de recompensas y atraer estratégicamente a los delegadores, los indexadores pueden actualizar sus parámetros de delegación actualizando su indexingRewardCut (partes por millón), queryFeeCut (partes por millón) y cooldownBlocks (número de bloques). 
Para hacerlo, llama a `setDelegationParameters()`. El siguiente ejemplo establece queryFeeCut para distribuir el 95% de los reembolsos de consultas al indexador y el 5% a los delegadores, establece indexingRewardCut para distribuir el 60% de las recompensas de indexación al indexador y el 40% a los delegadores, y establece `thecooldownBlocks` Periodo a 500 bloques.
+8. (Optional) In order to control the distribution of rewards and strategically attract delegators, indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, sets the indexingRewardCut to distribute 60% of indexing rewards to the indexer and 40% to delegators, and sets the `cooldownBlocks` period to 500 blocks.

```
setDelegationParameters(950000, 600000, 500)
```

-### La vida de una asignación
+### The life of an allocation

-Después de ser creada por un indexador, una asignación saludable pasa por cuatro fases.
+After being created by an indexer, a healthy allocation goes through four states.

-- **Activo**: Una vez que se crea una asignación en la cadena (\[allocateFrom()\](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol # L873)) se considera **activo**. Una parte de la participación propia y/o delegada del indexador se asigna a una implementación de subgrafo, lo que le permite reclamar recompensas de indexación y atender consultas para esa implementación de subgrafo. El agente indexador gestiona la creación de asignaciones basadas en las reglas del indexador.
+- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules.

-- **Cerrado**: Un indexador puede cerrar una asignación una vez que haya pasado 1 ciclo ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) o su agente indexador cerrará automáticamente la asignación después de **maxAllocationEpochs** (actualmente 28 días). Cuando una asignación se cierra con una prueba válida de indexación (POI), sus recompensas de indexación se distribuyen al indexador y sus delegadores (consulta "¿Cómo se distribuyen las recompensas?" A continuación para obtener más información).
+- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI), its indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more).

-- **Finalizada**: Una vez que se ha cerrado una asignación, hay un período de disputa después del cual la asignación se considera **finalizada** y los reembolsos de tarifas de consulta están disponibles para ser reclamados (claim()).
El agente indexador supervisa la red para detectar asignaciones ** finalizadas** y las reclama si están por encima de un umbral configurable (y opcional), ** - -allocation-claim-threshold**.
+- **Finalized** - Once an allocation has been closed, there is a dispute period after which the allocation is considered **finalized** and its query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **--allocation-claim-threshold**.

-- **Reclamado**: El estado final de una asignación; ha seguido su curso como una asignación activa, se han distribuido todas las recompensas elegibles y se han reclamado los reembolsos de las tarifas de consulta.
+- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed.

From cd55414308c4c140fc27ecce8d455fef3a980215 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:56:43 -0500
Subject: [PATCH 103/241] New translations curating.mdx (Arabic)

---
 pages/ar/curating.mdx | 104 +++++++++++++++++++++---------------------
 1 file changed, 52 insertions(+), 52 deletions(-)

diff --git a/pages/ar/curating.mdx b/pages/ar/curating.mdx
index 6e37a8776a6f..7f542ca5ebc8 100644
--- a/pages/ar/curating.mdx
+++ b/pages/ar/curating.mdx
@@ -2,102 +2,102 @@ title: (التنسيق) curating
 ---

-المنسقون مهمون للاقتصاد اللامركزي في the Graph. يستخدمون معرفتهم بالنظام البيئي web3 للتقييم والإشارة ل Subgraphs والتي تفهرس بواسطة شبكة The Graph. من خلال المستكشف (Explorer)، يستطيع المنسقون (curators) عرض بيانات الشبكة وذلك لاتخاذ قرارات الإشارة. تقوم شبكة The Graph بمكافئة المنسقين الذين يشيرون إلى ال Subgraphs عالية الجودة بحصة من رسوم الاستعلام التي تولدها ال subgraphs. يتم تحفيز المنسقون(Curators) ليقومون بالإشارة بشكل مبكر. هذه الإشارات من المنسقين مهمة للمفهرسين ، والذين يمكنهم بعد ذلك معالجة أو فهرسة البيانات من ال subgraphs المشار إليها.
+Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs with a share of the query fees that those subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs.

-يمكن للمنسقين اتخاذ القرار إما بالإشارة إلى إصدار معين من Subgraphs أو الإشارة باستخدام الترحيل التلقائي auto-migrate. عند الإشارة باستخدام الترحيل التلقائي ، ستتم دائما ترقية حصص المنسق إلى أحدث إصدار ينشره المطور. وإذا قررت الإشارة إلى إصدار معين، فستظل الحصص دائما في هذا الإصدار المحدد.
+When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version.

-تذكر أن عملية التنسيق محفوفة بالمخاطر. نتمنى أن تبذل قصارى جهدك وذلك لتنسق ال Subgraphs الموثوقة. إنشاء ال subgraphs لا يحتاج إلى ترخيص، لذلك يمكن للأشخاص إنشاء subgraphs وتسميتها بأي اسم يرغبون فيه.
لمزيد من الإرشادات حول مخاطر التنسيق ، تحقق من[The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) +Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) -## منحنى الترابط 101 +## Bonding Curve 101 -أولا لنعد خطوة إلى الوراء. يحتوي كل subgraphs على منحنى ربط يتم فيه صك حصص التنسيق ، وذلك عندما يضيف المستخدم إشارة **للمنحنى**. لكل Subgraphs منحنى ترابط فريد من نوعه. يتم تصميم منحنيات الترابط بحيث يزداد بشكل ثابت سعر صك حصة التنسيق على Subgraphs ، وذلك مقارنة بعدد الحصص التي تم صكها. +First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. -![سعر السهم](/img/price-per-share.png) +![Price per shares](/img/price-per-share.png) -نتيجة لذلك ، يرتفع السعر بثبات ، مما يعني أنه سيكون شراء السهم أكثر تكلفة مع مرور الوقت. فيما يلي مثال لما نعنيه ، راجع منحنى الترابط أدناه: +As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: -![منحنى الترابط Bonding curve](/img/bonding-curve.png) +![Bonding curve](/img/bonding-curve.png) -ضع في اعتبارك أن لدينا منسقان يشتركان في Subgraph واحد: +Consider we have two curators that mint shares for a subgraph: -- المنسق (أ) هو أول من أشار إلى ال Subgraphs. من خلال إضافة 120000 GRT إلى المنحنى ، سيكون من الممكن صك 2000 سهم. -- تظهر إشارة المنسق "ب" على ال Subgraph لاحقا. للحصول على نفس كمية حصص المنسق "أ" ، يجب إضافة 360000 GRT للمنحنى. -- لأن كلا من المنسقين يحتفظان بنصف إجمالي اسهم التنسيق ، فإنهم سيحصلان على قدر متساوي من عوائد المنسقين. -- إذا قام أي من المنسقين بحرق 2000 من حصص التنسيق الخاصة بهم ،فإنهم سيحصلون على 360000 GRT. -- سيحصل المنسق المتبقي على جميع عوائد المنسق لهذ ال subgraphs. وإذا قام بحرق حصته للحصول علىGRT ، فإنه سيحصل على 120.000 GRT. -- ** TLDR: ** يكون تقييم أسهم تنسيق GRT من خلال منحنى الترابط ويمكن أن يكون متقلبا. هناك إمكانية لتكبد خسائر كبيرة. الإشارة في وقت مبكر يعني أنك تضع كمية أقل من GRT لكل سهم. هذا يعني أنك تكسب من عائدات المنسق لكل GRT أكثر من المنسقين المتأخرين لنفس ال subgraph. +- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. +- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. +- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. +- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. +- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. +- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. 
By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. -بشكل عام ، منحنى الترابط هو منحنى رياضي يحدد العلاقة بين عرض التوكن وسعر الأصول. في الحالة المحددة لتنسيق ال subgraph ، ** يرتفع سعر كل سهم في ال subgraph مع كل توكن مستثمر ** ويقل السعر \*\* لكل سهم مع كل بيع للتوكن. +In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** -في حالة The Graph +In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. -## كيفية الإشارة +## How to Signal -الآن بعد أن غطينا الأساسيات حول كيفية عمل منحنى الترابط ،طريقة الإشارة على ال subgraph هي كالتالي. ضمن علامة التبويب "Curator" في "Graph Explorer" ، سيتمكن المنسقون من الإشارة وإلغاء الإشارة إلى بعض ال subgraphs بناء على إحصائيات الشبكة. للحصول على نظرة عامة خطوة بخطوة حول كيفية القيام بذلك في Explorer ،[انقر هنا](https://thegraph.com/docs/explorer) +Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) -يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات. +A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. -الإشارة إلى إصدار معين مفيدة بشكل خاص عند استخدام subgraph واحد بواسطة عدة dapps. قد يحتاج ال dapp إلى تحديث ال subgraph بانتظام بميزات جديدة. وقد يفضل dapp آخر استخدام إصدار subgraph أقدم تم اختباره جيدا. عند بداية التنسيق ، يتم فرض ضريبة بنسبة 1٪. +Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. -يمكن أن يكون ترحيل migration الإشارة تلقائيا إلى أحدث إصدار أمرا ذا قيمة لضمان استمرار تراكم رسوم الاستعلام. في كل مرة تقوم فيها بالتنسيق ، يتم فرض ضريبة تنسيق بنسبة 1٪. ستدفع أيضًا ضريبة تنسيق 0.5٪ على كل ترحيل. لا يُنصح مطورو ال Subgraph بنشر إصدارات جديدة بشكل متكرر - يتعين عليهم دفع ضريبة تنسيق بنسبة 0.5٪ على جميع أسهم التنسيق المرحلة تلقائيًا. +Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. 
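As a back-of-the-envelope sketch of how those two taxes combine (hypothetical amounts; the 1% signalling tax and 0.5% migration tax are the rates quoted above, and the second figure assumes the migrated shares are still worth roughly the original deposit):

```sh
# Hypothetical: signal 10,000 GRT on a subgraph, then auto-migrate once.
echo "10000 * 0.01" | bc    # 100 GRT burned as curation tax when signalling
echo "10000 * 0.005" | bc   # about 50 GRT more when the shares auto-migrate to a new version
```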
-> ** ملاحظة **: العنوان الأول الذي يشير ل subgraph معين يعتبر هو المنسق الأول وسيتعين عليه القيام بأعمال gas أكثر بكثير من بقية المنسقين التاليين لأن المنسق الأول يهيئ توكن أسهم التنسيق، ويهيئ منحنى الترابط ، وكذلك ينقل التوكن إلى the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. -## ماذا تعني الإشارة لشبكة The Graph؟ +## What does Signaling mean for The Graph Network? -لكي يتمكن المستهلك من الاستعلام عن subgraph ، يجب أولا فهرسة ال subgraph. الفهرسة هي عملية يتم فيها النظر إلى الملفات، والبيانات، والبيانات الوصفية وفهرستها بحيث يمكن العثور على النتائج بشكل أسرع. يجب تنظيم بيانات ال subgraph لتكون قابلة للبحث فيها. +For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. -وبالتالي ، إذا قام المفهرسون بتخمين ال subgraphs التي يجب عليهم فهرستها ، فستكون هناك فرصة منخفضة في أن يكسبوا رسوم استعلام جيدة لأنه لن يكون لديهم طريقة للتحقق من ال subgraphs ذات الجودة العالية. أدخل التنسيق. +And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. -المنسقون بجعلون شبكة The Graph فعالة، والتأشير signaling هي العملية التي يستخدمها المنسقون لإعلام المفهرسين بأن ال subgraph جيدة للفهرسة ، حيث تتم إضافة GRT إلى منحنى الترابط ل subgraph. يمكن للمفهرسين أن يثقوا بإشارة المنسق لأنه عند الإشارة ، يقوم المنسقون بصك سهم تنسيق ال subgraph ، مما يمنحهم حق الحصول على جزء من رسوم الاستعلام المستقبلية التي ينشئها ال subgraph. إشارة المنسق يتم تمثيلها كتوكن ERC20 والتي تسمى (Graph Curation Shares (GCS. المنسقين الراغبين في كسب المزيد من رسوم الاستعلام عليهم إرسال الإشارة بـGRT إلى الـ subgraphs التي يتوقعون أنها ستولد تدفقا قويا للرسوم للشبكة.هناك ضريبة ودائع على المنسقين لتثبيط اتخاذ قرار يمكن أن يضر بسلامة الشبكة. يكسب المنسقون أيضا رسوم استعلام أقل إذا اختاروا التنسيق على subgraph منخفض الجودة ، حيث سيكون هناك عددا أقل من الاستعلامات لمعالجتها أو عددا أقل من المفهرسين لمعالجة هذه الاستعلامات. انظر إلى الرسم البياني أدناه! +Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! 
-![مخطط التأشير Signaling diagram](/img/curator-signaling.png) +![Signaling diagram](/img/curator-signaling.png) -يمكن للمفهرسين العثور على subgraphs لفهرستها وذلك بناء على إشارات التنسيق التي يرونها في The Graph Explorer (لقطة الشاشة أدناه). +Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). -![مستكشف subgraphs](/img/explorer-subgraphs.png) +![Explorer subgraphs](/img/explorer-subgraphs.png) -## المخاطر +## Risks -1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة. -2. رسوم التنسيق - عندما يشير المنسق إلى GRT على subgraph ، فإنه يتحمل ضريبة تنسيق بنسبة 1٪. يتم حرق هذه الرسوم ويودع الباقي في العرض الاحتياطي لمنحنى الترابط. -3. عندما يحرق المنسقون أسهمهم لسحب GRT ، سينخفض تقييم GRT للأسهم المتبقية. كن على علم بأنه في بعض الحالات ، قد يقرر المنسقون حرق أسهمهم ** كلها مرة واحدة **. قد تكون هذه الحالة شائعة إذا توقف مطور dapp عن الاصدار/ التحسين والاستعلام عن ال subgraph الخاص به أو في حالة فشل ال subgraph. نتيجة لذلك ، قد يتمكن المنسقون المتبقون فقط من سحب جزء من GRT الأولية الخاصة بهم. لدور الشبكة بمخاطر أقل انظر\[Delegators\] (https://thegraph.com/docs/delegating). -4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا. - - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪. - - إذا أشرت إلى إصدار معين من subgraph وفشل ، فسيتعين عليك حرق أسهم التنسق الخاصة بك يدويا. لاحظ أنك قد تتلقى GRT أكثر أو أقل مما أودعته في البداية في منحنى التنسيق، وهي مخاطرة مرتبطة بكونك منسقا. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). +4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. -## الأسئلة الشائعة حول التنسيق +## Curation FAQs -### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟ +### 1. What % of query fees do Curators earn? 
-من خلال الإشارة لل subgraph ، سوف تكسب حصة من جميع رسوم الاستعلام التي يولدها هذا ال subgraph. تذهب 10٪ من جميع رسوم الاستعلام إلى المنسقين بالتناسب مع أسهم التنسيق الخاصة بهم. هذه الـ 10٪ خاضعة للقوانين. +By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. -### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟ +### 2. How do I decide which subgraphs are high quality to signal on? -يعد العثور على ال subgraphs عالية الجودة مهمة معقدة ، ولكن يمكن التعامل معها بعدة طرق مختلفة. بصفتك منسقا، فأنت تريد البحث عن ال subgraphs الموثوقة والتي تؤدي إلى زيادة حجم الاستعلام. ال subgraph الجدير بالثقة يكون ذا قيمة إذا كان مكتملا ودقيقا ويدعم احتياجات بيانات ال dapp. قد يحتاج ال subgraph الذي تم تكوينه بشكل سيئ إلى المراجعة أو إعادة النشر ، وقد ينتهي به الأمر أيضًا إلى الفشل. من المهم للمنسقين القيام بمراجعة بنية أو كود ال subgraph من أجل تقييم ما إذا كان ال subgraph ذو قيمة أم لا. كنتيجة ل: +Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- يمكن للمنسقين استخدام فهمهم للشبكة لمحاولة التنبؤ كيف لل subgraph أن يولد حجم استعلام أعلى أو أقل في المستقبل -- يجب أن يفهم المنسقون أيضا المقاييس المتوفرة من خلال the Graph Explorer. المقاييس مثل حجم الاستعلام السابق ومن هو مطور ال subgraph تساعد في تحديد ما إذا كان ال subgraph يستحق الإشارة إليه أم لا. +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. -### 3. ما هي تكلفة ترقية ال subgraph؟ +### 3. What’s the cost of upgrading a subgraph? -ترحيل أسهم التنسيق الخاصة بك إلى إصدار subgraph جديد يؤدي إلى فرض ضريبة تنسيق بنسبة 1٪. يمكن للمنسقين الاشتراك في أحدث إصدار من ال subgraph. عندما يتم ترحيل أسهم المنسقين تلقائيا إلى إصدار جديد ، سيدفع المنسقون أيضا نصف ضريبة التنسيق ، أي. 0.5٪ ، لأن ترقية ال subgraphs هي إجراء متسلسل يكلف غاز gas. +Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. -### 4. كم مرة يمكنني ترقية ال subgraph الخاص بي؟ +### 4. How often can I upgrade my subgraph? -يفضل عدم ترقية ال subgraphs بشكل متكرر. ارجع للسؤال أعلاه لمزيد من التفاصيل. +It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. -### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟ +### 5. Can I sell my curation shares? -لا يمكن "شراء" أو "بيع" أسهم التنسيق مثل توكنات ERC20 الأخرى التي قد تكون على دراية بها. 
يمكن فقط صكها (إنشاؤها) أو حرقها (إتلافها) على طول منحنى الترابط ل subgraph معين. من خلال منحنى الترابط يتم تحديد مقدار GRT اللازمة لصك إشارة جديدة ، وكمية GRT التي تتلقاها عندما تحرق إشارتك الحالية. بصفتك منسقا، عليك أن تعرف أنه عندما تقوم بحرق أسهم التنسيق الخاصة بك لسحب GRT ، فيمكن أن ينتهي بك الأمر ب GRT أكثر أو أقل مما قمت بإيداعه في البداية. +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -لازلت مشوشا؟ راجع فيديو دليل التنسيق أدناه: +Still confused? Check out our Curation video guide below:
From b82e8a8cea234a80b1ddb6fe3a9f5ef6ff15f20c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:44 -0500 Subject: [PATCH 104/241] New translations delegating.mdx (Japanese) --- pages/ja/delegating.mdx | 81 +++++++++++++++++++++-------------------- 1 file changed, 41 insertions(+), 40 deletions(-) diff --git a/pages/ja/delegating.mdx b/pages/ja/delegating.mdx index 06c1297a5a4a..eb058d946234 100644 --- a/pages/ja/delegating.mdx +++ b/pages/ja/delegating.mdx @@ -2,91 +2,92 @@ title: デリゲーティング --- -デリゲーターは悪意の行動をしてもスラッシュされないが、デリゲーターにはデポジット税が課せられ、ネットワークの整合性を損なう可能性のある悪い意思決定を抑止します。 +Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## デリゲーターガイド +## Delegator Guide -このガイドでは、グラフネットワークで効果的なデリゲーターになるための方法を説明します。 デリゲーターは、デリゲートされたステークのすべてのインデクサーとともにプロトコルの収益を共有します。 デリゲーターは、複数の要素を考慮した上で、最善の判断でインデクサーを選ばなければなりません。 このガイドでは、メタマスクの適切な設定方法などについては説明しません。このガイドには3つのセクションがあります。 There are three sections in this guide: +This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- グラフネットワークでトークンをデリゲートすることのリスク -- デリゲーターとしての期待リターンの計算方法 -- グラフネットワークの UI でデリゲートする手順のビデオガイド +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a delegator +- A Video guide showing the steps to delegate in the Graph Network UI -## デリゲーションリスク +## Delegation Risks -以下に、本プロトコルでデリゲーターとなる場合の主なリスクを挙げます。 +Listed below are the main risks of being a delegator in the protocol. -### デリゲーション手数料 +### The delegation fee -デリゲートするたびに、0.5%の手数料が発生します。 つまり、1000GRT を委任する場合は、自動的に 5GRT が消費されます。 +It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. -つまり、安全のために、デリゲーターはインデクサーにデリゲートした場合のリターンを計算しておく必要があります。 例えば、デリゲーターは、自分のデリゲートに対する 0.5%のデポジット税を取り戻すのに何日かかるかを計算するとよいでしょう。 +This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. -### デリゲーションのアンボンディング期間 +### The delegation unbonding period -デリゲーターが、デリゲーションを解除しようとすると、そのトークンは 28 日間のアンボンディング期間が設けられます。 つまり、28 日間はトークンの譲渡や報酬の獲得ができません。 +Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. -考慮すべき点は、インデクサーを賢く選ぶことです。 信頼できない、あるいは良い仕事をしていないインデクサーを選んだ場合、アンデリゲートしたくなるでしょう。 つまり、報酬を獲得する機会を大幅に失うことになり、GRT をバーンするのと同じくらいの負担となります。 +One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT.
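To make the payback question above concrete, here is a rough sketch. The 10% effective annual return is an assumed, illustrative figure, not a protocol parameter; actual returns depend on the chosen Indexer and network conditions.

```sh
# Delegating 1,000 GRT burns 0.5% up front (5 GRT).
# At an assumed 10% effective annual return on the delegation:
echo "scale=2; 5 * 365 / (1000 * 0.10)" | bc   # 18.25, i.e. roughly 18 days to earn the tax back
```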
- デリゲーション UIの0.5%の手数料と、28日間のアンボンディング期間に注目してください。 + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day + unbonding period._
-### デリゲーターに公平な報酬を支払う信頼できるインデクサーの選択 +### Choosing a trustworthy indexer with a fair reward payout for delegators -これは理解すべき重要な部分です。 まず、デリゲーションパラメータである 3 つの非常に重要な値について説明します。 +This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. -インデキシング報酬カット - インデキシング報酬カットは、インデクサーが自分のために保持する報酬の部分です。 つまり、これが 100%に設定されていると、デリゲーターであるあなたは 0 のインデキシング報酬を得ることになります。 UI に 80%と表示されている場合は、デリゲーターとして 20%を受け取ることになります。 重要な注意点として、ネットワークの初期段階では、インデキシング報酬が報酬の大半を占めます。 +Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards.
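A minimal numeric sketch of that split, using made-up figures:

```sh
# If an allocation earns 1,000 GRT of indexing rewards and the Indexer's
# indexingRewardCut is 80%, the delegation pool shares the remaining 20%.
echo "1000 * (1 - 0.80)" | bc   # 200 GRT left for delegators, split pro rata
```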
- トップのインデクサーは、デリゲーターに90%の報酬を与えています。 The + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
-- クエリーフィーカット - これはインデキシングリワードカットと全く同じ働きをします。 しかし、これは特に、インデクサーが収集したクエリフィーに対するリターンを対象としています。 ネットワークの初期段階では、クエリフィーからのリターンは、インデキシング報酬に比べて非常に小さいことに注意する必要があります。 ネットワーク内のクエリフィーがいつから大きくなり始めるのかを判断するために、ネットワークに注意を払うことをお勧めします。 +- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. -このように、適切なインデクサーを選択するためには、多くのことを考えなければなりません。 だからこそ、The Graph の Discord をリサーチして、社会的評価や技術的評価が高く、デリゲーターに安定して報酬を与えることができるインデクサーが誰なのかを見極めることを強くお勧めします。 多くのインデクサーは Discord で活発に活動しており、あなたの質問に喜んで答えてくれるでしょう。 彼らの多くはテストネットで何ヶ月もインデックスを作成しており、ネットワークの健全性と成功を向上させるために、デリゲーターが良いリターンを得られるように最善を尽くしています。 +As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. -### デリゲーターの期待リターンを計算 +### Calculating delegators expected return -デリゲーターはリターンを決定する際に、多くの要素を考慮しなければなりません。 以下のとおりです: +A Delegator has to consider a lot of factors when determining the return. These -- デリゲーターは、インデクサーが利用可能なデリゲートトークンを使用する能力にも目を向けることができます。 もしインデクサーが利用可能なトークンをすべて割り当てていなければ、彼らは自分自身やデリゲーターのために得られる最大の利益を得られないことになります。 -- 現在のネットワークでは、インデクサーは 1 日から 28 日の間であればいつでも割り当てを終了して報酬を受け取ることができます。 そのため、インデクサーがまだ回収していない報酬をたくさん抱えている可能性があり、その結果、報酬の総額が少なくなっています。 これは初期の段階で考慮しておく必要があります。 +- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. -### クエリフィーのカットとインデックスフィーのカットの検討 +### Considering the query fee cut and indexing fee cut -上記のセクションで説明したように、問い合わせ手数料カットとインデクシングフィーのカット設定について透明性が高く、誠実なインデクサーを選ぶべきです。 デリゲーターは、Parameters Cooldown の時間を見て、どれだけの時間的余裕があるかを確認する必要があります。 その後、デリゲーターが得ている報酬の額を計算するのはとても簡単です。 その式は以下のとおりです: +As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. 
The formula is: -![インデキシング リワードカット](/img/Delegation-Reward-Formula.png) +![Delegation Image 3](/img/Delegation-Reward-Formula.png) -### インデクサーのデリゲーションプールを考慮する +### Considering the indexers delegation pool -デリゲーターが考慮しなければならないもう一つのことは、デリゲーションプールのどの割合を所有しているかということです。 全てのデリゲーション報酬は均等に分配され、デリゲーターがプールに入金した金額によって決まるプールの簡単なリバランスが行われます。 これにより、デリゲーターはプールのシェアを得ることができます。 +Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: -![シェアの計算式](/img/Share-Forumla.png) +![Share formula](/img/Share-Forumla.png) -したがって、デリゲーターは計算して、デリゲーターに 20%を提供しているインデクサーの方が、より良いリターンを提供していると判断することができます。 +Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. -そのため、デリゲーターは、デリゲーターに20%を提供しているインデクサーの方が、より良いリターンを提供していると判断して計算することができます。 +A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. -### デリゲーション能力を考慮する +### Considering the delegation capacity -もうひとつ考慮しなければならないのが、デリゲーション能力です。 現在、デリゲーションレシオは 16 に設定されています。 これは、インデクサーが 1,000,000GRT をステークしている場合、そのデリゲーション容量はプロトコルで使用できる 16,000,000GRT のデリゲーショントークンであることを意味します。 この量を超えるデリゲートされたトークンは、全てのデリゲーター報酬を薄めてしまいます。 +Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. -あるインデクサーが 100,000,000 GRT をデリゲートされていて、その容量が 16,000,000 GRT しかないと想像してみてください。 これは事実上、84,000,000 GRT トークンがトークンの獲得に使われていないことを意味します。 そして、すべてのデリゲーターとインデクサーは、本来得られるはずの報酬よりもずっと少ない報酬しか得られていません。 +Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. -この式を使うと、デリゲーターに 20%しか提供していないインデクサーが、デリゲーターに 90%を提供しているインデクサーよりも、デリゲーターにさらに良い報酬を与えている可能性があることがわかります。 +Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. -## ネットワーク UI のビデオガイド +## Video guide for the network UI -この式を使うと、デリゲーターに 20%しか提供していないインデクサーが、デリゲーターに 90%を提供しているインデクサーよりも、デリゲーターにさらに良い報酬を与えている可能性があることがわかります。 +This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
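Before moving on, here is a compact numeric recap of the delegation-capacity arithmetic described above, using the same figures as the example (illustrative only):

```sh
# Delegation capacity at the 16x delegation ratio for 1,000,000 GRT of self-stake:
echo "1000000 * 16" | bc           # 16,000,000 GRT of delegation capacity
# If 100,000,000 GRT ends up delegated to that Indexer anyway:
echo "100000000 - 16000000" | bc   # 84,000,000 GRT sit above capacity and dilute delegator rewards
```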
From 7ce8353b1f4337811be71bdf0d606c40c0f171ea Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:45 -0500 Subject: [PATCH 105/241] New translations curating.mdx (Japanese) --- pages/ja/curating.mdx | 104 +++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 52 deletions(-) diff --git a/pages/ja/curating.mdx b/pages/ja/curating.mdx index d4e44811fbcf..2b526405fd98 100644 --- a/pages/ja/curating.mdx +++ b/pages/ja/curating.mdx @@ -2,102 +2,102 @@ title: キューレーティング --- -キュレーターは、グラフの分散型経済にとって重要な存在です。 キューレーターは、web3 のエコシステムに関する知識を用いて、The Graph Network がインデックスを付けるべきサブグラフを評価し、シグナルを送ります。 キュレーターは Explorer を通じてネットワークのデータを見て、シグナルを出す判断をすることができます。 The Graph Network は、良質なサブグラフにシグナルを送ったキュレーターに、サブグラフが生み出すクエリフィーのシェアを与えます。 キュレーターには、早期にシグナルを送るという経済的なインセンティブが働きます。 キュレーターからのシグナルはインデクサーにとって非常に重要で、インデクサーはシグナルを受けたサブグラフからデータを処理したり、インデックスを作成したりすることができます。 +Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. -シグナリングの際、キュレーターはサブグラフの特定のバージョンでシグナリングするか、auto-migrate を使ってシグナリングするかを決めることができます。 Auto-migrate を使ってシグナリングすると、キュレーターのシェアは常に開発者が公開した最新バージョンにアップグレードされます。 代わりに特定のバージョンでシグナルを送ることにした場合、シェアは常にその特定のバージョンのままとなります。 +When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. -キュレーションはリスクを伴うことを忘れないでください。 そして、信頼できるサブグラフでキュレーションを行うよう、十分に注意してください。 サブグラフの作成はパーミッションレスであり、人々はサブグラフを作成し、好きな名前をつけることができます。 キュレーションのリスクについての詳しいガイダンスは、 [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) をご覧ください。 +Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) -## ボンディングカーブ 101 +## Bonding Curve 101 -順を追ってみていきましょう。 まず、各サブグラフにはボンディングカーブがあり、ユーザーがその曲線(カーブ)にシグナルを加えると、キュレーション・シェアが形成されます。 各サブグラフのボンディングカーブはユニークです。 ボンディングカーブは、サブグラフ上でキュレーション・シェアをミントするための価格が、ミントされるシェアの数に応じて直線的に増加するように設計されています。 +First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. -![シェアあたりの価格](/img/price-per-share.png) +![Price per shares](/img/price-per-share.png) -その結果、価格は直線的に上昇し、時間の経過とともにシェアの購入価格が高くなることを意味しています。 下のボンディングカーブを見て、その例を示します: +As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. 
Here’s an example of what we mean, see the bonding curve below: -![ボンディングカーブ](/img/bonding-curve.png) +![Bonding curve](/img/bonding-curve.png) -あるサブグラフのシェアを作成する 2 人のキュレーターがいるとします。 +Consider we have two curators that mint shares for a subgraph: -- キュレーター A は、サブグラフに最初にシグナルを送ります。 120,000GRT をボンディングカーブに加えることで、2000 もシェアをミントすることができます。 -- キュレーター B のシグナルは、後のある時点でサブグラフに表示されます。 キュレーター A と同じ量のシェアを受け取るためには、360,000GRT を曲線に加える必要があります。 -- 両方のキュレーターがキュレーションシェアの合計の半分を保有しているので、彼らは同額のキュレーターロイヤルティを受け取ることになります。 -- もし、キュレーターの誰かが 2000 のキュレーションシェアをバーンした場合、360,000GRT を受け取ることになります。 -- 残りのキュレーターは、そのサブグラフのキュレーター・ロイヤリティーをすべて受け取ることになります。 もし彼らが自分のシェアをバーンして GRT を引き出す場合、彼らは 120,000GRT を受け取ることになります。 -- **TLDR:** キュレーションシェアの GRT 評価はボンディングカーブによって決まるため、変動しやすいという傾向があります。 また、大きな損失を被る可能性があります。 早期にシグナリングするということは、1 つのシェアに対してより少ない GRT を投入することを意味します。 ひいては、同じサブグラフの後続のキュレーターよりも、GRT あたりのキュレーター・ロイヤリティーを多く得られることになります。 +- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. +- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. +- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. +- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. +- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. +- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. -一般的にボンディングカーブとは、トークンの供給量と資産価格の関係を定義する数学的な曲線のことです。 サブグラフのキュレーションという具体的なケースでは、サブグラフの各シェアの価格は、投資されたトークンごとに上昇し、販売されたトークンごとに減少します。 +In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** -The Graph の場合は、 [Bancor が実装しているボンディングカーブ式](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) を活用しています。 +In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. -## シグナルの出し方 +## How to Signal -ボンディングカーブの仕組みについて基本的なことを説明しましたが、ここではサブグラフにシグナルを送る方法を説明します。 グラフ・エクスプローラーの「キュレーター」タブ内で、キュレーターはネットワーク・スタッツに基づいて特定のサブグラフにシグナルを送ることができるようになります。 エクスプローラーでの操作方法の概要はこちらをご覧ください。 +Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) -キュレーターは、特定のサブグラフのバージョンでシグナルを出すことも、そのサブグラフの最新のプロダクションビルドに自動的にシグナルを移行させることも可能ですます。 どちらも有効な戦略であり、それぞれに長所と短所があります。 +A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. 
Both are valid strategies and come with their own pros and cons. -特定のバージョンでのシグナリングは、1 つのサブグラフを複数の dapps が使用する場合に特に有効です。 ある DAP は、サブグラフを定期的に新機能で更新する必要があるかもしれません。 別のアプリは、古くても、よくテストされたサブグラフのバージョンを使用することを好むかもしれません。 初回キュレーション時には、1%の標準税が発生します。 +Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. -シグナルを最新のプロダクションビルドに自動的に移行させることは、クエリー料金の発生を確実にするために有効です。 キュレーションを行うたびに、1%のキュレーション税が発生します。 また、移行ごとに 0.5%のキュレーション税を支払うことになります。 つまり、サブグラフの開発者が、頻繁に新バージョンを公開することは推奨されません。 自動移行された全てのキュレーションシェアに対して、0.5%のキュレーション税を支払わなければならないからです。 +Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. -> 注:特定のサブグラフにシグナルを送る最初のアドレスは、最初のキュレーターとみなされ、後続のキュレーターよりもはるかに多くのガスを消費する仕事をしなければなりません。 最初のキュレーターは、キュレーションシェアのトークンを初期化し、ボンディングカーブを初期化し、トークンをグラフのプロキシに転送するからです。 +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. -## グラフネットワークにとってのシグナリングとは? +## What does Signaling mean for The Graph Network? -最終的な消費者がサブグラフをクエリできるようにするためには、まずサブグラフにインデックスを付ける必要があります。 インデックス化(インデクシング)とは、ファイルやデータ、メタデータを調べ、カタログ化し、結果をより早く見つけられるようにするための作業です。 サブグラフのデータを検索可能にするためには、データを整理する必要があります。 +For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. -そのため、インデクサーがどのサブグラフをインデックスすべきかを推測しなければならない場合、どのサブグラフが良質であるかを検証する方法がないため、しっかりとしたクエリフィーを得られる可能性は低くなります。 そこでキュレーションの出番です。 +And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. -キュレーターは The Graph ネットワークを効率化する存在であり、シグナリングとは、キュレーターがインデクサーにサブグラフのインデックスの作成に適していることを知らせるためのプロセスです。 シグナリングによりキュレータはサブグラフのキュレーションシェアを獲得し、サブグラフが駆動する将来のクエリフィーの一部を受け取る権利を得るため、インデクサーはキュレータからのシグナルを本質的に信頼することができます。 キュレーターのシグナルは、Graph Curation Shares (GCS) と呼ばれる ERC20 トークンで表されます。 より多くのクエリーフィーを獲得したいキュレーターは、ネットワークへの強いフィーの流れを生み出すと予測されるサブグラフに GRT をシグナルするべきであるといえます。 キュレーターはスラッシュされることはありませんが、ネットワークの整合性を損なう可能性のある不適切な意思決定を阻害するために、キュレーターにはデポジット税が課せられます。 また、キュレーターは、質の低いサブグラフでキュレーションを行うことを選択した場合、処理すべきクエリ数や、それらのクエリを処理するインデクサー数が少なくなるため、少ないクエリ手数料しか得られなくなります。 下の図をご覧ください。 +Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. 
Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! -![シグナリング ダイアグラム](/img/curator-signaling.png) +![Signaling diagram](/img/curator-signaling.png) -インデクサーは、「グラフ・エクスプローラー」で確認したキュレーション・シグナルに基づいて、インデックスを作成するサブグラフを見つけることができます。 +Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). -![エクスプローラー サブグラフ](/img/explorer-subgraphs.png) +![Explorer subgraphs](/img/explorer-subgraphs.png) -## リスク +## Risks -1. The Graph では、クエリ市場は本質的に歴史が浅く、初期の市場ダイナミクスのために、あなたの%APY が予想より低くなるリスクがあります。 -2. キュレーション料 - キュレーターがサブグラフ上で GRT をシグナルすると、1%のキュレーション税が発生します。 この手数料はバーンされ、残りはボンディングカーブのリザーブサプライに預けられます。 -3. キュレーターが GRT を引き出すためにシェアをバーンすると、残りのシェアの GRT 評価額が下がります。 場合によっては、キュレーターが自分のシェアを一度にバーンすることを決めることがあるので注意が必要です。 このような状況は、dapp 開発者がサブグラフのバージョン管理や改良、クエリをやめた場合や、サブグラフが故障した場合によく見られます。 その結果、残ったキュレーターは当初の GRT の何分の一かしか引き出せないかもしれません。 リスクプロファイルの低いネットワークロールについては、\[Delegators\](https://thegraph.com/docs/delegating)を参照してください。 -4. サブグラフはバグで失敗することがあります。 失敗したサブグラフは、クエリフィーが発生しません。 結果的に、開発者がバグを修正して新しいバージョンを展開するまで待たなければならなくなります。 - - サブグラフの最新バージョンに加入している場合、シェアはその新バージョンに自動移行します。 これには0.5%のキュレーション税がかかります。 - - 特定のサブグラフのバージョンでシグナリングしていて、それが失敗した場合は、手動でキュレーションシャイアをバーンする必要があります。 キュレーション・カーブに最初に預けた金額よりも多く、または少なく GRT を受け取る可能性があることに注意してください。 これはキュレーターとしてのリスクです。 そして、新しいサブグラフのバージョンにシグナルを送ることができ、1%のキュレーション税が発生します。 +1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). +4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. -## キューレーション FAQ +## Curation FAQs -### 1. キュレータはクエリフィーの何%を獲得できますか? +### 1. What % of query fees do Curators earn? 
-サブグラフにシグナリングすることで、そのサブグラフが生成する、全てのクエリフィーのシェアを得ることができます。 全てのクエリーフィーの 10%は、キュレーターのキュレーションシェアに比例してキュレーターに支払われます。 この 10%はガバナンスの対象となります。 +By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. -### 2. シグナルを出すのに適した質の高いサブグラフはどのようにして決めるのですか? +### 2. How do I decide which subgraphs are high quality to signal on? -高品質のサブグラフを見つけるのは複雑な作業ですが、さまざまな方法でアプローチできます。 キュレーターとしては、クエリボリュームを牽引している信頼できるサブグラフを探したいと考えます。 信頼できるサブグラフは、それが完全で正確であり、Dap のデータニーズをサポートしていれば価値があるかもしれません。 アーキテクチャが不十分なサブグラフは、修正や再公開が必要になるかもしれませんし、失敗に終わることもあります。 キュレーターにとって、サブグラフが価値あるものかどうかを評価するために、サブグラフのアーキテクチャやコードをレビューすることは非常に重要です。 その結果として: +Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- キュレーターはネットワークの理解を利用して、個々のサブグラフが将来的にどのように高いまたは低いクエリボリュームを生成するかを予測することができます。 -- キュレーターは、グラフ・エクスプローラーで利用可能なメトリクスも理解する必要があります。 過去のクエリボリュームやサブグラフの開発者が誰であるかといったメトリクスは、サブグラフがシグナリングする価値があるかどうかを判断するのに役立ちます。 +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. -### 3. サブグラフのアップグレードにかかるコストは? +### 3. What’s the cost of upgrading a subgraph? -キュレーション株式を新しいサブグラフのバージョンに移行すると、1%のキュレーション税が発生します。 キュレーターは、サブグラフの最新バージョンへの登録を選択することができます。 キュレーターのシェアが新しいバージョンに自動移行されると、キュレーターはキュレーション税の半分、つまり0.5%を支払うことになります。これは、サブグラフのアップグレードがガスを消費するオンチェーンアクションであるためです。 +Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. -### 4. どのくらいの頻度でサブグラフをアップグレードできますか? +### 4. How often can I upgrade my subgraph? -サブグラフのアップグレードは、あまり頻繁に行わないことをお勧めします。 詳しくは上記の質問を参照してください。 +It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. -### 5. キュレーションのシェアを売却することはできますか? +### 5. Can I sell my curation shares? -キュレーションシェアは、他の ERC20 トークンのように「買う」ことも「売る」こともできません。 キュレーションシェアは、特定のサブグラフのボンディングカーブに沿って、ミント(作成)またはバーン(破棄)することしかできません。 新しいシグナルをミントするのに必要な GRT の量と、既存のシグナルをバーンしたときに受け取る GRT の量は、そのボンディングカーブによって決まります。 キュレーターとしては、GRT を引き出すためにキュレーションシェアをバーンすると、最初に預けた GRT よりも多くの GRT を手にすることもあれば、少なくなることもあることを把握しておく必要があります。 +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. 
As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -まだ不明点がありますか? その他の不明点に関しては、 以下のキュレーションビデオガイドをご覧ください: +Still confused? Check out our Curation video guide below:
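The rates quoted above (a 1% curation tax on fresh signal, 0.5% on auto-migrated shares, and a 10% query-fee pool split pro rata among curators) lend themselves to a quick worked sketch. The snippet below is illustrative only: it assumes the tax is taken off the top of the amount signalled, and the function names and fee figures are invented for the example.

```python
# Hedged sketch of the curation-tax and query-fee arithmetic described above.
CURATION_TAX = 0.01        # 1% on fresh signal
AUTO_MIGRATE_TAX = 0.005   # 0.5% on auto-migrated curation shares
CURATOR_FEE_POOL = 0.10    # curators' collective slice of query fees

def net_signal(grt: float, tax: float = CURATION_TAX) -> float:
    """GRT that actually reaches the bonding curve after the tax is burned."""
    return grt * (1 - tax)

def curator_query_fees(total_fees: float, my_shares: float, all_shares: float) -> float:
    """One curator's pro-rata share of the 10% query-fee pool."""
    return total_fees * CURATOR_FEE_POOL * (my_shares / all_shares)

print(net_signal(1_000))                         # 990.0 GRT deposited, 10 GRT burned
print(net_signal(1_000, AUTO_MIGRATE_TAX))       # 995.0 GRT if only the 0.5% rate applies
print(curator_query_fees(50_000, 2_000, 4_000))  # 2500.0 GRT on hypothetical fee volume
```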
From 67a7899fb50096903f84f2302584183f64e85ad5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:56:46 -0500
Subject: [PATCH 106/241] New translations curating.mdx (Korean)

---
 pages/ko/curating.mdx | 104 +++++++++++++++++++++---------------------
 1 file changed, 52 insertions(+), 52 deletions(-)

diff --git a/pages/ko/curating.mdx b/pages/ko/curating.mdx
index 456deec666f7..203e77b352cf 100644
--- a/pages/ko/curating.mdx
+++ b/pages/ko/curating.mdx
@@ -2,102 +2,102 @@ title: 큐레이팅
 ---

-큐레이터들은 더 그래프의 탈중앙화 경제에 매우 중요한 역할을 합니다. 이들은 웹3 생태계에 대한 지식을 활용하여 그래프 네트워크에 의해 색인화되어야 하는 서브그래프에 대한 평가와 신호를 수행합니다. 탐색기를 통해 큐레이터는 네트워크 데이터를 보고 신호 전달 결정을 내릴 수 있습니다. 더그래프 네트워크는 양질의 서브그래프에 신호를 보내는 큐레이터에게 서브그래프가 생성하는 쿼리 수수료에 대한 몫을 보상합니다. 큐레이터들은 이른 신호를 보내도록 경제적으로 장려된다. 큐레이터의 이러한 신호들은 신호되어진 서브그래프들로부터 데이터를 처리하거나 인덱싱 할 수 있는 인덱서들에게 중요합니다.
+Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators who signal on good quality subgraphs with a share of the query fees that those subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs.

-신호를 보낼 때 큐레이터는 서브그래프의 특정 버전에 신호를 보내거나 자동 마이그레이션을 사용하여 신호를 보내기로 결정할 수 있습니다. 자동 마이그레이션을 사용하여 신호를 보낼 때 큐레이터의 공유는 항상 개발자가 게시한 최신 버전으로 업그레이드됩니다. 만약, 여러분이 이를 대신하여 특정 버전에서 신호를 보내기로 결정하면 공유는 항상 이 특정 버전으로 유지됩니다.
+When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version.

-큐레이션은 위험하다는 것을 기억하시길 바랍니다. 여러분들이 확실히 신뢰할 수 있는 서브그래프에 대한 큐레이션이 진행되도록 노력일 기울이시길 바랍니다. 서브그래프의 제작은 비허가형이기 때문에, 사람들은 서브그래프를 만들고 그들이 원하는 어떠한 이름으로도 명명할 수 있습니다. 큐레이션 위험에 대한 더 많은 가이드를 얻기 위해 [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)를 확인하시길 바랍니다.
+Remember that curation is risky. Please do your due diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)

-## 본딩 커브 101
+## Bonding Curve 101

-먼저 우리가 한발짝 물러나 보도록 하겠습니다. 각 서브그래프에는 유저가 시그날을 해당 커브**에** 추가할 때 큐레이션 쉐어가 발행되는 본딩 커브가 존재합니다. 각 서브그래프의 본딩 커브는 특별합니다. 본딩커브는 서브그래프에서 큐레이션 쉐어를 발행하는 가격이 발행된 쉐어 수에 걸쳐 선형적으로 증가하도록 설계되었습니다.
+First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted.

-![シェアあたりの価格](/img/price-per-share.png)
+![Price per shares](/img/price-per-share.png)

-결과적으로 가격이 선형적으로 상승하므로 시간이 지남에 따라 쉐어를 구입하는 데 더 많은 비용이 소요됩니다. 여기 저희가 무엇을 의미하는지에 대한 예시가 있습니다. 아래의 본딩 커브를 보시죠.
+As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time.
Here’s an example of what we mean, see the bonding curve below: -![ボンディングカーブ](/img/bonding-curve.png) +![Bonding curve](/img/bonding-curve.png) -서브그래프에 대한 쉐어를 발행하는 큐레이터가 두 명 있다고 가정해 봅시다. +Consider we have two curators that mint shares for a subgraph: -- 큐레이터 A는 서브그래프에 신호를 보낸 첫 번째 사람입니다. 120,000 GRT를 커브에 추가함으로써, 그들은 2000개의 쉐어를 발행할 수 있습니다. -- 어느 시점 이후에 큐레이터 B의 신호가 서브그래프에 전달됩니다. 큐레이터 A와 동일한 양의 쉐어를 받기 위해서는 360,000 GRT를 커브에 추가해야 합니다. -- 두 큐레이터가 큐레이터 총 쉐어의 절반씩을 보유하고 있기 때문에 큐레이터 로열티는 똑같이 분배됩니다. -- 만약 큐레이터 중 누구든지 2000 큐레이션 쉐어를 소각할 경우 그들은 360,000 GRT를 받게 됩니다. -- 나머지 큐레이터는 이제 해당 서브그래프에 대한 모든 큐레이터 로열티를 받게 됩니다. 만약 그들이 GRT를 출금하기 위해 쉐어를 소각하는 경우 120,000 GRT를 받게 됩니다. -- **TLDR:** 해당 큐레이션 쉐어의 GRT 가치는 본딩 커브에 의해 결정되며 변동성이 있을 수 있습니다. 큰 손실을 입을 수 있는 가능성이 존재합니다. 이른 신호를 보낸다는 것은 여러분들이 각 쉐어를 위해 더 적은 GRT를 넣는다는 것을 의미합니다. 나아가서, 이는 동일한 서브그래프에 대해 이후 참여하는 큐레이터보다 GRT당 큐레이터 로열티를 더 많이 받는다는 의미이기도 합니다. +- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. +- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. +- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. +- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. +- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. +- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. -일반적으로, 본딩 커브는 토큰 공급과 자산 가격 사이의 관계를 정의하는 수학적 곡선입니다. 서브그래프 큐레이션의 특별한 경우에, **각 서브그래프 쉐어의 가격은 각 토큰이 투자될 때마다 증가합니다.** 그리고 **각 토큰 쉐어의 가격은 각 토큰이 판매될 때 마다 감소합니다.** +In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** -더그래프의 경우에는, [Bancor의 본딩 커브 공식 구현](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA)이 활용됩니다. +In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. -## 신호를 보내는 방법 +## How to Signal -이제 저희는 본딩 커브의 작동 방식에 대한 기본 내용을 알아보았는데요, 서브 그래프에서 신호를 보내는 방법은 다음과 같습니다. 더그래프 탐색기의 큐레이터 탭 내에서 큐레이터는 네트워크 통계를 기반으로 특정 서브그래프에 신호전달 혹은 신호해제를 할 수 있습니다. 탐색기에서 이 작업을 수행하는 방법에 대한 단계별 개요를 알아보기 위해, [이곳](https://thegraph.com/docs/explorer)을 클릭하시길 바랍니다. +Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) -또한 그들은 그 서브그래프의 최신 생산 빌드에 신호를 자동으로 이전하도록 선택할 수도 있습니다. 둘 다 유효한 전략이며 나름대로 장단점이 존재합니다. 
+A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. -특정 버전의 신호는 하나의 서브그래프가 여러 개의 dapp에 의해 사용될 때 특히 유용합니다. 하나의 dapp은 새로운 기능들과 함께 서브그래프를 정기적으로 업데이트해야 할 수도 있습니다. 다른 dapp에서는 테스트를 잘 거친 이전 서브그래프 버전을 사용하는 것을 선호할 수 있습니다. 최초 큐레이션 시, 1%의 표준 세금이 부과됩니다. +Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. -여러분들의 신호가 최신 프로덕션 빌드로 자동 이전 되도록 하는 것은 여러분들이 쿼리 수수료를 계속 발생시키는 데 유용할 수 있습니다. 여러분들이 매번 큐레이션을 할 때마다, 1퍼센트의 큐레이션 세금이 부과됩니다. 또한 여러분들은 매번의 마이그레이션 마다 0.5%의 큐레이션 세금을 지불해야합니다. 서브그래프 개발자는 새로운 버전을 자주 발행하는 것을 꺼려합니다. - 그들은 자동으로 마이그레이션된 모든 큐레이션 쉐어에 대해 0.5%의 큐레이션 세금을 내야 합니다. +Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. -> **참고**: 특정 서브그래프를 신호하는 첫 번째 주소는 첫 번째 큐레이터로 간주되며 첫 번째 큐레이터는 큐레이션 쉐어 토큰을 초기화하고, 본딩 커브를 초기화하며, 또한 토큰을 그래프 프록시로 전송하기 때문에 이어서 참여하는 다른 큐레이터들 보다 훨씬 더 많은 가스 집약적인 작업을 수행해야 합니다. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. -## 더그래프 네트워크에서 신호를 보내는 것은 무엇을 의미할까요? +## What does Signaling mean for The Graph Network? -최종 소비자가 서브그래프를 쿼리할 수 있으려면 먼저 서브그래프를 인덱싱해야 합니다. 인덱싱은 파일, 데이터 및 메타데이터를 보고, 카탈로그를 작성한 다음 인덱싱하여 원하는 결과를 더 빨리 찾을 수 있도록 하는 프로세스입니다. 서브그래프의 데이터가 검색 가능하게 하기 위해서, 데이터 구성이 필요합니다. +For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. -따라서, 만약 인덱서들이 어떤 서브그래프를 인덱싱해야 하는지 추측해야만 한다면, 어떤 서브그래프가 좋은지 검증할 방법이 없기 때문에 강력한 쿼리 비용을 얻을 가능성은 낮습니다. 큐레이션을 시작합니다. +And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. -큐레이터는 그래프 네트워크를 효율적으로 만들고, 시그널링은 큐레이터가 인덱서에 어떤 서브그래프가 인덱싱 하기에 좋다는 것을 알리기 위해 사용하는 프로세스입니다. 여기서 서브그래프를 위해 본딩 커브에 GRT가 추가됩니다. 인덱서들은 큐레이터의 신호를 본질적으로 신뢰할 수 있습니다. 그 이유는 신호를 보냄에 있어, 큐레이터가 발행하는 서브그래프의 큐레이션 쉐어는 해당 서브그래프가 향후 제공하게 될 쿼리 수수료에 대한 비율로서 적용되기 때문입니다. 큐레이션 신호는 GCS(Graph Curation Shares)라고 불리우는 ERC20 토큰으로 표현됩니다. 더 많은 쿼리 수수료를 얻고자 하는 큐레이터는 네트워크에 대한 수수료 흐름을 크게 발생시킬 것으로 예측되는 서브그래프에 GRT 신호를 보내야 합니다.큐레이터는 나쁜 행위로 인해 슬래싱 패널티를 받지는 않지만, 네트워크의 무결성을 해칠 수 있는 형편없는 의사결정에 대한 의욕을 꺾기 위해 큐레이터에게 부과되는 예치세가 존재합니다. 큐레이터는 만약에 그들이 낮은 품질의 서브그래프를 큐레이팅 하기로 선택할 경우, 처리 할 쿼리가 적거나, 이러한 쿼리를 처리할 인덱서들이 적기 때문에 더 낮은 쿼리 수수료를 취득하게 될 것입니다. 아래의 다이아그램을 보시죠! 
+Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below!
-![シグナリング ダイアグラム](/img/curator-signaling.png)
+![Signaling diagram](/img/curator-signaling.png)

-인덱서는 더그래프 탐색기(아래의 스크린샷 참조)에 표시되는 큐레이션 신호를 기반으로 인덱싱할 서브그래프를 찾을 수 있습니다.
+Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below).

-![エクスプローラー サブグラフ](/img/explorer-subgraphs.png)
+![Explorer subgraphs](/img/explorer-subgraphs.png)

-## 위험요소
+## Risks

-1. The Graph의 쿼리 시장은 본질적으로 젊고, 초기 시장의 변동성으로 인해 APY %가 예상보다 낮을 수 있습니다.
-2. 큐레이션 수수료 - 큐레이터가 서브그래프상에 GRT 신호를 보낼 때, 그들은 1%의 큐래이션 세를 내야합니다. 이 수수료는 소각되며, 나머지는 본딩 커브의 예비 공급량에 예치됩니다.
-3. 큐레이터들이 GRT를 출금하기 위해 그들의 쉐어를 소각할 경우, 잔존하는 쉐어들의 GRT 가치는 줄어들 것입니다. 어떤 경우에는 큐레이터들이 **한꺼번에** 쉐어를 소각하기로 결정할 수도 있다는 것을 주의하시길 바랍니다. 이러한 상황은 만약 dapp 개발자가 서브그래프의 버전/개선 및 쿼리를 중지하거나 어떠한 서브그래프가 실패할 경우 일반적으로 발생할 수 있습니다. 결과적으로, 잔존 큐레이터들은 아마 오직 그들의 초기 GRT의 일부만을 출금 가능할 수도 있습니다. 위험 프로필이 낮은 네트워크 역할을 위해, \[위임자\] (https://thegraph.com/docs/delegating)를 읽어보시기 바랍니다.
-4. 어떤 서브그래프는 버그로 인해 실패할 수도 있습니다. 실패한 서브그래프에는 쿼리 수수료가 부과되지 않습니다. 따라서 개발자가 버그를 수정하고 새 버전을 배포할 때까지 기다려야 합니다.
-   - 만약 여러분들이 최신 버전의 서브그래프에 가입하신 경우에, 여러분들의 쉐어는 해당 신규 버전으로 자동 마이그레이션될 것입니다. 이는 0.5%의 큐레이션 세금이 부과될 것입니다.
-   - 만약 여러분이 특정 서브그래프 버전에 신호를 보냈지만 그것이 실패한다면, 여러분은 여러분의 큐레이션 쉐어를 수동으로 소각해야 할 것입니다. 큐레이션 커브에 처음 여러분들이 보관한 GRT보다 더 많거나 적은 GRT를 수령하실 수 있다는 것을 인지하시길 바랍니다. 이는 큐레이터 역할과 관련된 위험요소입니다. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics.
+2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve.
+3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating).
+4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
+   - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version.
This will incur a 0.5% curation tax. + - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. -## 큐레이션 FAQ +## Curation FAQs -### 1. 큐레이터들은 쿼리 수수료의 몇 %를 얻나요? +### 1. What % of query fees do Curators earn? -서브그래프에 신호를 보냄으로써, 여러분들은 이 서브그래프가 생성하는 모든 쿼리 수수료의 쉐어를 얻게 됩니다. 모든 쿼리 수수료의 10%는 각자의 큐레이터 쉐어에 비례하여 각 큐레이터들에게 분배됩니다. 이 10%는 거버넌스 대상입니다. +By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. -### 2. 어떤 서브그래프들이 신호를 보낼 고품질의 서브래프인지 어떻게 결정하나요? +### 2. How do I decide which subgraphs are high quality to signal on? -고품질 서브그래프를 찾는 것은 복잡한 작업이지만 다양한 방식의 접근이 가능합니다. 큐레이터로서, 여러분들은 쿼리 볼륨을 높이는 신뢰할 수 있는 서브그래프를 찾길 원하실 것입니다. 신뢰할 수 있는 서브그래프는 완전하고, 정확하며, dapp의 데이터 요구 사항들을 적절히 지원하는 경우 가치가 있을 것입니다. 잘못 구성된 서브그래프는 수정 혹은 다시 게시되어야 하지만, 결국에 실패할 수도 있습니다. 큐레이터는 어떠한 서브그래프가 가치가 있는지 평가하기 위해, 서브그래프의 아키텍처 또는 코드를 검토하는 것이 중요합니다. 결론적으로; +Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- 큐레이터는 네트워크에 대한 이해를 바탕으로 개별 서브그래프가 미래에 어떻게 더 높거나 더 낮은 쿼리 볼륨을 생성할 수 있는지 시도 및 예측을 해볼 수 있습니다. -- 큐레이터는 그래프 탐색기를 통해 사용할 수 있는 메트릭스 또한 이해해야 합니다. 과거 쿼리 볼륨 및 서브그래프 개발자 정보와 같은 메트릭스는 서브그래프가 신호를 보낼 가치가 있는지 여부를 결정하는 데 도움이 될 수 있습니다. +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. -### 3. 서브그래프의 업그레이드 비용은 얼마인가요? +### 3. What’s the cost of upgrading a subgraph? -여러분들의 큐레이션 쉐어를 새 서브그래프 버전으로 마이그레이션하시면, 1%의 큐레이션 세금이 발생합니다. 큐레이터는 서브그래프의 최신 버전을 구독하도록 선택할 수 있습니다. 큐레이터 쉐어가 새 버전으로 자동 마이그레이션 되면 큐레이터들은 큐레이션 세금의 절반 또한 지불합니다. 즉, 0.5%를 지불하게 되는데, 이는 서브그래프를 업그레이드하는 일은 가스를 소모하는 온체인 작업이기 때문입니다. +Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. -### 4. 저는 얼마나 자주 저의 서브그래프를 업그레이드 할 수 있나요? +### 4. How often can I upgrade my subgraph? -서브그래프를 너무 자주 업그레이드하지 않으시길 권장합니다. 자세한 내용은 위의 질문을 참조하시길 바랍니다. +It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. -### 5. 저는 저의 큐레이션 쉐어들을 판매할 수 있나요? +### 5. Can I sell my curation shares? -큐레이션 쉐어들은 아마 여러분들이 익숙하실 다른 ERC20 토큰들 처럼 "구매" 또는 "판매" 될 수 없습니다. 이는 오직 특정 서브그래프를 위한 본딩 커브에서 생성되고 소각될 수 있습니다. 새로운 신호를 만드는 데 필요한 GRT의 양과 기존 신호를 소각할 때 받는 GRT의 양은 해당 본딩 커브에 의해 결정됩니다. 
큐레이터로서, 여러분들은 GRT를 인출하기 위해 큐레이션 쉐어를 소각할 때 처음에 예치한 것보다 많거나 적은 GRT를 수령할 수 있음을 인지하셔야 합니다. +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -아직도 혼란스러우신가요? 아래의 큐레이션 비디오 가이드를 확인해보시길 바랍니다. +Still confused? Check out our Curation video guide below:
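The curator A / curator B numbers used earlier in this file (120,000 GRT for the first 2,000 shares, 360,000 GRT for the next 2,000) follow from a linearly increasing share price. The sketch below reproduces that arithmetic under the assumption of a simple linear price curve; the slope constant is chosen only to match the example and is not the protocol's actual Bancor-based formula.

```python
# Minimal sketch of a linear bonding curve, assuming price per share = SLOPE * supply.
# The GRT moved on or off the curve between two supply levels is the area under that line.
SLOPE = 0.06  # illustrative value picked to reproduce the example above

def grt_between(s0: float, s1: float) -> float:
    """GRT deposited (minting) or returned (burning) as supply moves between s0 and s1."""
    return SLOPE * abs(s1 ** 2 - s0 ** 2) / 2

print(grt_between(0, 2_000))      # Curator A mints 2,000 shares for 120,000 GRT
print(grt_between(2_000, 4_000))  # Curator B pays 360,000 GRT for the next 2,000 shares
print(grt_between(4_000, 2_000))  # whoever burns 2,000 shares first receives 360,000 GRT
print(grt_between(2_000, 0))      # the remaining 2,000 shares are then worth 120,000 GRT
```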
From dbc8e355041a31b7c7364c2b6b083affefa3c2c3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 19:56:47 -0500
Subject: [PATCH 107/241] New translations curating.mdx (Chinese Simplified)

---
 pages/zh/curating.mdx | 96 +++++++++++++++++++++----------------------
 1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/pages/zh/curating.mdx b/pages/zh/curating.mdx
index 66ed9fe2bd2a..8faa88482bf7 100644
--- a/pages/zh/curating.mdx
+++ b/pages/zh/curating.mdx
@@ -2,96 +2,96 @@ title: 策展
 ---

-策展人对于 The Graph 去中心化的经济至关重要。 他们利用自己对 web3 生态系统的了解,对应该被 The Graph 网络索引的子图进行评估并发出信号。 通过资源管理器,策展人能够查看网络数据以做出信号决定。 The Graph 网络对那些在优质子图上发出信号的策展人给予奖励,并从子图产生的查询费中分得一部分。 在经济上,策展人被激励着尽早发出信号。 这些来自策展人的线索对索引人来说非常重要,他们可以对这些发出信号的子图进行处理或索引。
+Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators who signal on good quality subgraphs with a share of the query fees that those subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs.

-在发出信号时,策展人可以决定在子图的一个特定版本上发出信号,或者使用自动迁移发出信号。 当使用自动迁移发出信号时,策展人的份额将始终升级到由开发商发布的最新版本。 如果你决定在一个特定的版本上发出信号,股份将始终保持在这个特定的版本上。
+When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version.

-Remember that curation is risky. 请做好你的工作,确保你在你信任的子图上进行策展。 请做好你的工作,确保你在你信任的子图上进行策展。 创建子图是没有权限的,所以人们可以创建子图,并称其为任何他们想要的名字。 关于策展风险的更多指导,请查看 [The Graph Academy 的策展指南。 ](https://thegraph.academy/curators/)
+Remember that curation is risky. Please do your due diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)

-## 联合曲线 101
+## Bonding Curve 101

-首先,我们退一步讲。 每个子图都有一条粘合曲线,当用户在曲线上 **添加**信号时,策展份额就在这条曲线上被铸造出来。 每个子图的粘合曲线都是独一无二的。 粘合曲线的结构是这样的:在一个子图上铸造一个策展份额的价格随着铸造的份额数量而线性增加。
+First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted.

 ![Price per shares](/img/price-per-share.png)

-因此,价格是线性增长的,这意味着随着时间的推移,购买股票的成本会越来越高。 这里有一个例子说明我们的意思,请看下面的粘合曲线。
+As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time.
Here’s an example of what we mean, see the bonding curve below: -![联合曲线](/img/bonding-curve.png) +![Bonding curve](/img/bonding-curve.png) -考虑到我们有两个策展人,他们为一个子图铸造了股份: +Consider we have two curators that mint shares for a subgraph: -- 策展人 A 是第一个对子图发出信号的人。 通过在曲线中加入 120,000 GRT,他们能够铸造出 2000 股。 -- 策展人 B 在之后的某个时间点在子图上发出信号。 为了获得与策展人 A 相同数量的股票,他们必须在曲线中加入 360,000 GRT。 -- 由于两位策展人都持有策展人股份总数的一半,他们将获得同等数量的策展人使用费。 -- 如果任何一个策展人现在烧掉他们的 2000 个策展份额,他们将获得 360,000 GRT。 -- 剩下的策展人现在将收到该子图的所有策展人使用费。 如果他们烧掉他们的股份来提取 GRT,他们将得到 12 万 GRT。 -- **TLDR:** 策展人股份的 GRT 估值是由粘合曲线决定的,可能会有波动。 有可能出现大的收益,也有可能出现大的损失。 提前发出信号意味着你为每只股票投入的 GRT 较少。 推而广之,这意味着在相同的子图上,你比后来的策展人在每个 GRT 上赚取更多的策展人使用费。 +- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. +- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. +- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. +- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. +- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. +- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. -一般来说,粘合曲线是一条数学曲线,定义了代币供应和资产价格之间的关系。 在子图策展的具体情况下,\*\*资产(子图份额)的价格随着每一个代币的投入而增加,资产的价格随着每一个代币的出售而减少。 +In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** -在 The Graph 的案例中, [Bancor 对粘合曲线公式的实施](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) 被利用。 +In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. -## 如何进行信号处理 +## How to Signal -现在我们已经介绍了关于粘合曲线如何工作的基本知识,这就是你将如何在子图上发出信号。 在 The Graph 资源管理器的策展人选项卡中,策展人将能够根据网络统计数据对某些子图发出信号和取消信号。 关于如何在资源管理器中做到这一点的一步步概述,请[点击这里。 ](https://thegraph.com/docs/explorer) +Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) -策展人可以选择在特定的子图版本上发出信号,或者他们可以选择让他们的策展份额自动迁移到该子图的最新生产版本。 这两种策略都是有效的,都有各自的优点和缺点。 +A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. -当一个子图被多个 dApp 使用时,在特定版本上发出信号特别有用。 一个 dApp 可能需要定期更新子图的新功能。 另一个 dApp 可能更喜欢使用旧的、经过良好测试的子图版本。 在初始策展时,会产生 1%的标准税。 +Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. 
One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred.

-让你的策展份额自动迁移到最新的生产构建,对确保你不断累积查询费用是有价值的。 每次你策展时,都会产生 1%的策展税。 每次迁移时,你也将支付 0.5%的策展税。 不鼓励子图开发人员频繁发布新版本--他们必须为所有自动迁移的策展份额支付 0.5%的策展税。
+Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares.

-> **注意**: 第一个给特定子图发出信号的地址被认为是第一个策展人,将不得不消耗比之后其他策展人更多的燃料工作,因为第一个策展人初始化了策展份额代币,初始化了粘合曲线,还将代币转移到 Graph 代理。
+> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy.

-## 信号对 The Graph 网络意味着什么?
+## What does Signaling mean for The Graph Network?

-为了让终端消费者能够查询一个子图,该子图必须首先被索引。 索引是一个过程,对文件、数据和元数据进行查看、编目,然后编制索引,这样可以更快地找到结果。 为了使子图的数据可以被搜索到,它需要被组织起来。
+For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized.

-因此,如果索引人不得不猜测他们应该索引哪些子图,那么他们赚取强大的查询费用的机会就会很低,因为他们没有办法验证哪些子图是高质量的。 进入策展阶段。
+And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation.

-策展人使 The Graph 网络变得高效,信号是策展人用来让索引人知道一个子图是好的索引的过程,其中 GRT 被存入子图的粘合曲线。 索引人可以从本质上信任策展人的信号,因为一旦发出信号,策展人就会为该子图铸造一个策展份额,使他们有权获得该子图所带来的部分未来查询费用。 策展人的信号以ERC20代币的形式表示,称为Graph Curation Shares(GCS)。 想赚取更多查询费的策展人应该向他们预测会给网络带来大量费用的子图发出他们的 GRT 信号。 策展人不能因为不良行为而被砍掉,但有一个对策展人的存款税,以抑制可能损害网络完整性的不良决策。 如果策展人选择在一个低质量的子图上进行策展,他们也会赚取较少的查询费,因为有较少的查询需要处理,或者有较少的索引人处理这些查询。 请看下面的图!
+Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below!

 ![Signaling diagram](/img/curator-signaling.png)

-索引人可以根据他们在 The Graph 浏览器中看到的策展信号找到要索引的子图(下面的截图)。
+Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below).
![Explorer subgraphs](/img/explorer-subgraphs.png) -## 风险 +## Risks -1. 在 The Graph,查询市场本来就很年轻,由于市场动态刚刚开始,你的年收益率可能低于你的预期,这是有风险的。 -2. 策展费 - 当策展人对子图发出 GRT 信号时,他们会产生 1%的策展税。 这笔费用被烧掉,剩下的被存入绑定曲线的储备供应中。 -3. 当策展人烧掉他们的股份以提取 GRT 时,剩余股份的 GRT 估值将被降低。 请注意,在某些情况下,策展人可能决定 **一次性**烧掉他们的股份。 这种情况可能很常见,如果一个 dApp 开发者停止版本/改进和查询他们的子图,或者如果一个子图失败。 因此,剩下的策展人可能只能提取他们最初 GRT 的一小部分。 关于风险较低的网络角色,请看委托人 \[Delegators\](https://thegraph.com/docs/delegating). -4. 一个子图可能由于错误而失败。 一个失败的子图不会累积查询费用。 因此,你必须等待,直到开发人员修复错误并部署一个新的版本。 - - 如果你订阅了一个子图的最新版本,你的股份将自动迁移到该新版本。 这将产生 0.5%的策展税。 - - 如果你已经在一个特定的子图版本上发出信号,但它失败了,你将不得不手动烧毁你的策展税。 请注意,你可能会收到比你最初存入策展曲线更多或更少的 GRT,这是作为策展人的相关风险。 然后你可以在新的子图版本上发出信号,从而产生1%的策展税。 +1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). +4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. -## 策展常见问题 +## Curation FAQs -### 1. 策展人能赚取多少百分比的查询费? +### 1. What % of query fees do Curators earn? -通过在一个子图上发出信号,你将获得这个子图产生的所有查询费用的份额。 所有查询费用的 10%将按策展人的策展份额比例分配给他们。 这 10%是受管理的。 +By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. -### 2. 我如何决定哪些子图是高质量的信号? +### 2. How do I decide which subgraphs are high quality to signal on? -寻找高质量的子图是一项复杂的任务,但它可以通过许多不同的方式来实现。 作为策展人,你要寻找那些推动查询量的值得信赖的子图。 这些值得信赖的子图是有价值的,因为它们是完整的,准确的,并支持 dApp 的数据需求。 一个架构不良的子图可能需要修改或重新发布,也可能最终失败。 策展人审查子图的架构或代码,以评估一个子图是否有价值,这是至关重要的。 因此: +Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. 
As a result: -- 策展人可以利用他们的市场知识,尝试预测单个子图在未来可能产生更多或更少的查询量 -- 策展人还应该了解通过 The Graph 浏览器提供的指标。 像过去的查询量和子图开发者是谁这样的指标可以帮助确定一个子图是否值得发出信号。 +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. -### 3. 升级一个子图的成本是多少? +### 3. What’s the cost of upgrading a subgraph? -将你的策展份额迁移到一个新的子图版本会产生 1%的策展税。 策展人可以选择订阅子图的最新版本。 当策展人质押被自动迁移到一个新的版本时,策展人也将支付一半的策展税,即 0.5%,因为升级子图是一个链上动作,需要花费交易费。 +Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. -### 4. 我多长时间可以升级我的子图? +### 4. How often can I upgrade my subgraph? -建议你不要太频繁地升级你的子图。 更多细节请见上面的问题。 +It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. -### 5. 我可以出售我的策展股份吗? +### 5. Can I sell my curation shares? -策展是一个开放的市场,任何人都可以购买(在这种情况下,"mint"),或出售("burn")特定子图的策划份额。 它们只能沿着特定子图的粘合曲线被铸造(创建)或烧毁(销毁)。 铸造新信号所需的 GRT 数量,以及当你烧毁现有信号时收到的 GRT 数量,是由该粘合曲线决定的。 作为一个策展人,你需要知道,当你燃烧你的策展份额来提取 GRT 时,你最终可能会得到比你最初存入的更多或更少的 GRT。 +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -还有困惑吗? 点击下面查看策展视频指导: +Still confused? Check out our Curation video guide below:
+>
From b0fbbdec09664e62b040392b643fdfaf8b086c89 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:49 -0500 Subject: [PATCH 109/241] New translations delegating.mdx (Spanish) --- pages/es/delegating.mdx | 84 ++++++++++++++++++++--------------------- 1 file changed, 42 insertions(+), 42 deletions(-) diff --git a/pages/es/delegating.mdx b/pages/es/delegating.mdx index 30e6905758a4..3c71bb2d7b41 100644 --- a/pages/es/delegating.mdx +++ b/pages/es/delegating.mdx @@ -2,92 +2,92 @@ title: delegación --- -Los delegadores no pueden ser penalizados por mal comportamiento, pero existe una tarifa inicial de depósitos que desalienta a los delegadores a tomar malas decisiones que puedan comprometer la integridad de la red. +Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## Guía del delegador +## Delegator Guide -Esta guía explicará cómo ser un delegador efectivo en Graph Network. Los delegadores comparten las ganancias del protocolo junto con todos los indexadores en base a participación delegada. Un delegador deberá usar su propio discernimiento para elegir los mejores indexadores, en base a una serie de factores. Tenga en cuenta que esta guía no expondrá los pasos necesarios para la configuración adecuada de Metamask, ya que esa información está expuesta en internet. Hay tres secciones en está guía: +This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- Los riesgos de delegar tokens en la red de The Graph -- Cómo calcular los rendimientos que te esperan siendo delegador -- Una guía visual (en vídeo) que muestra los pasos para delegar a través de la interfaz de usuario ofrecida por The Graph +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a delegator +- A Video guide showing the steps to delegate in the Graph Network UI -## Riesgos al delegar +## Delegation Risks -A continuación se enumeran los principales riesgos de ser un delegador en el protocolo. +Listed below are the main risks of being a delegator in the protocol. -### La tarifa de delegación +### The delegation fee -Es importante comprender que cada vez que delegues, se te cobrará un 0,5%. Esto significa que si delegas 1000 GRT, automáticamente quemarás 5 GRT. +It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. -Esto significa que para estar seguro, un delegador debe calcular cuál será su retorno tras delegar a un Indexer. Por ejemplo, un delegador puede calcular cuántos días le tomará recuperar la tarifa inicial de depósito correspondiente al 0,5% de su delegación. +This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. 
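As a rough illustration of the break-even calculation suggested above, the sketch below estimates how many days of rewards it takes to earn back the 0.5% delegation tax. The APY figure is an invented assumption rather than a protocol parameter, and the tax is assumed to come out of the amount delegated.

```python
# Back-of-the-envelope sketch of the 0.5% delegation-tax break-even described above.
DELEGATION_TAX = 0.005

def days_to_recover_tax(delegated_grt: float, apy_estimate: float) -> float:
    burned = delegated_grt * DELEGATION_TAX        # e.g. 5 GRT on a 1,000 GRT delegation
    working_stake = delegated_grt - burned
    daily_rewards = working_stake * apy_estimate / 365
    return burned / daily_rewards

print(round(days_to_recover_tax(1_000, 0.10), 1))  # ~18.3 days at an assumed 10% APY
```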
-### Periodo de desvinculación (unstake) +### The delegation unbonding period -Siempre que un delegador quiera anular su participación en la red, sus tokens están sujetos a un período de desvinculación equivalente a 28 días. Esto significa que no podrá transferir sus tokens o ganar alguna recompensa durante los próximos 28 días. +Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. -Una cosa a considerar también, es elegir sabiamente al Indexador. Si eliges un Indexador que no es confiable, o que no está haciendo un buen trabajo, eso te impulsará a querer anular la delegación, lo que significa que perderás muchas oportunidades de obtener recompensas, la cual puede ser igual de mala que quemar GRT. +One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT.
- Ten en cuenta la tarifa del 0,5% en la interfaz de usuario para delegar, así como el período de desvinculación de 28 - días. + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day + unbonding period._
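A related back-of-the-envelope check, under the same kind of assumed APY, is what the 28-day unbonding period costs in rewards that stop accruing while tokens sit in the queue. The numbers below are purely illustrative.

```python
# Illustrative only: rewards foregone during the 28-day unbonding period,
# assuming a hypothetical APY for the stake being withdrawn.
UNBONDING_DAYS = 28

def missed_rewards(undelegated_grt: float, apy_estimate: float) -> float:
    return undelegated_grt * apy_estimate * UNBONDING_DAYS / 365

print(round(missed_rewards(10_000, 0.10), 1))  # ~76.7 GRT foregone at an assumed 10% APY
```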
-### Elige un indexador fiable, que pague recompensas justas a sus delegadores +### Choosing a trustworthy indexer with a fair reward payout for delegators -Está es una parte importante que debes comprender. Primero, analicemos tres valores muy importantes, los cuales son conocidos como Parámetros de Delegación. +This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. -Indexing Reward Cut: también conocido como el recorte de recompensas para el indexador, consiste en una porción de las recompensas generadas, las cuales se quedará el Indexer por el trabajo hecho. Eso significa que, si este valor se establece en 100%, no recibirás ninguna recompensa al ser delegador de este Indexer. Si ves el 80%, eso significa que como delegador, recibirás el 20% de dichas recompensas. Una nota importante: al comienzo de la red, las recompensas de indexación (Indexing Rewards) representará la mayoría de las recompensas. +Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards.
- El indexador de arriba, está dando a los delegadores el 90% de las recompensas generadas. El del medio está dando a - los delegadores un 20%. ...y finalmente, el de abajo está otorgando un ~83% a sus delegadores. + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The + middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
-- Query Fee Cut: esté funciona de igual forma que el Indexing Reward Cut. Sin embargo, esto funciona específicamente para un reembolso de las tarifas por cada consulta que cobrará el Indexador. Cabe resaltar que en los inicios de la red, los retornos de las tarifas por consulta serán muy pequeños en comparación con la recompensa de indexación. Se recomienda prestar atención a la red para determinar cuándo las tarifas por consulta dentro de la red, sean significativas.
+- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant.

-Como puedes ver, hay que pensar mucho a la hora de elegir al indexador correcto. Es por eso que te recomendamos encarecidamente que eches un vistazo al Discord de The Graph, para determinar quiénes son los Indexadores con la mayor reputación social y técnica, que puedan lograr beneficiar a los delegadores de manera sostenible. Muchos de los Indexadores son muy activos en Discord y estarán encantados de responder a tus preguntas. Muchos de ellos han Indexado durante meses en la red de prueba y están haciendo todo lo posible para ayudar a los delegadores a obtener un buen rendimiento, ya que mejora la salud y el éxito de la red.
+As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network.

-### Calculando el retorno esperado para los delegadores
+### Calculating delegators expected return

-Un delegador debe considerar muchos factores al determinar un retorno. Estos son expuestos a continuación:
+A Delegator has to consider a lot of factors when determining the return. These include:

-- Un delegador técnico también puede ver la capacidad de los Indexadores para usar los tokens que han sido delegados y la capacidad de disponibilidad a su favor. Si un in Indexador no está asignando todos los tokens disponibles, no está obteniendo el beneficio máximo que podría obtener para sí mismo o para sus delegadores.
+- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.

-- Por ahora, en la red, un Indexador puede optar por cerrar una asignación en cualquier momento y cobrar las recompensas dentro del primer día y el día 28. Por ende, es posible que un Indexador tenga muchas recompensas por recolectar y que por ello, sus recompensas totales sean bajas. Esto debe tenerse en cuenta durante los primeros días.
+- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low.
This should be taken into consideration in the early days. -### Siempre tenga en cuenta la tarifa por consulta y el recorte de recompensas para el Indexador +### Considering the query fee cut and indexing fee cut -Como se describe en las secciones anteriores, debes elegir un Indexador que sea transparente y honesto sobre cómo gestiona el recorte de tarifas por consulta (Query Fee Cut) y sus recortes de tarifas por indexar (Indexing Fee Cuts). Un delegador también debe mirar el tiempo de enfriamiento establecidos para los parámetros (Parameters Cooldown), a fin de conocer cada cuánto tiempo puede cambiar sus parámetros. Una vez hecho esto, es bastante sencillo calcular la cantidad de recompensas que reciben los delegadores. La fórmula es: +As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: -![Recorte de recompensas de indexación](/img/Delegation-Reward-Formula.png) +![Delegation Image 3](/img/Delegation-Reward-Formula.png) -### Tener en cuenta el pool de delegación de cada Indexador +### Considering the indexers delegation pool -Otra cosa que un delegador debe considerar es la participación que tendrá dentro del pool de delegación (Delegation Pool). Todas las recompensas de la delegación se comparten de manera uniforme, con un simple reequilibrio del pool, el cual es basado en la participación depositada dentro del mismo. Esto le da al delegador una participación del pool: +Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: -![Fórmula compartida](/img/Share-Forumla.png) +![Share formula](/img/Share-Forumla.png) -Usando esta fórmula, podemos ver que en realidad es posible que un indexador que ofrece solo el 20% a los delegadores, en realidad les dé a sus delegadores una recompensa aún mejor que un indexador que les da el 90%. +Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. -Por lo tanto, un delegador puede hacer sus propios cálculos a fin de determinar que, el Indexador que ofrece un 20% a los delegadores ofrece un mejor rendimiento. +A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. -### Considerar la capacidad de delegación +### Considering the delegation capacity -Otro aspecto a considerar es la capacidad de delegación. Actualmente, el promedio de delegación (Delegation Ratio) se establece en 16. Esto significa que si un Indexador ha colocado en stake en total 1.000.000 de GRT, su capacidad de delegación será de 16.000.000 en tokens GRT, los cuales pueden usarse para delegar dentro del protocolo. Cualquier token delegado por encima de esta cantidad diluirá todas las recompensas que recibirán los delegadores. +Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. 
This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.

-Imagina que un Indexador tiene 100.000.000 GRT delegados y su capacidad es de solo 16.000.000 de GRT. Esto significa que, efectivamente, 84.000.000 tokens GRT no se están utilizando para ganar tokens. Y todos los delegadores e incluso el mismo Indexador, están ganando menos recompensas de lo que deberían estar ganando.
+Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning far fewer rewards than they could be.

-Utilizando está formula, podemos discernir qué un Indexer el cual está ofreciendo un rendimiento del 20% a sus delegados, puede estar ofreciendo un mejor rendimiento que aquél Indexador que ofrece un 90% a sus delegadores.
+Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.

-## Guía visual sobre la interfaz de la red
+## Video guide for the network UI

-Utilizando está formula, podemos discernir qué un Indexer el cual está ofreciendo un rendimiento del 20% a sus delegados, puede estar ofreciendo un mejor rendimiento que aquél Indexador que ofrece un 90% a sus delegadores.
+This guide provides a full review of this document and explains how to weigh all of these considerations while interacting with the UI.
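The claim above, that an Indexer passing only 20% to Delegators can still out-pay one passing 90%, comes down to how large a slice of the delegation pool you own. A hypothetical comparison, with every figure invented for illustration:

```python
# Hypothetical comparison of two Indexers: pool ownership can matter more than the
# percentage handed to Delegators. All figures are made up.
def my_reward(indexer_rewards: float, delegator_pct: float,
              my_delegation: float, delegation_pool: float) -> float:
    """One Delegator's slice: the Delegators' portion of rewards, split pro rata by pool share."""
    return indexer_rewards * delegator_pct * (my_delegation / delegation_pool)

# Indexer A passes 90% to Delegators but has a very large pool.
print(my_reward(1_000, 0.90, my_delegation=10_000, delegation_pool=10_000_000))  # 0.9 GRT
# Indexer B passes only 20% but has a much smaller pool.
print(my_reward(1_000, 0.20, my_delegation=10_000, delegation_pool=500_000))     # 4.0 GRT
```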
From 7883b0ac990cf51630cac5acd1814008dcd76d41 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:50 -0500 Subject: [PATCH 110/241] New translations delegating.mdx (Arabic) --- pages/ar/delegating.mdx | 76 +++++++++++++++++++++-------------------- 1 file changed, 39 insertions(+), 37 deletions(-) diff --git a/pages/ar/delegating.mdx b/pages/ar/delegating.mdx index 207be3e2a948..26a0e8a1415a 100644 --- a/pages/ar/delegating.mdx +++ b/pages/ar/delegating.mdx @@ -4,88 +4,90 @@ title: تفويض Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## دليل المفوض +## Delegator Guide This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- مخاطر تفويض التوكن tokens في شبكة The Graph -- كيفية حساب العوائد المتوقعة كمفوض -- فيديو يوضح خطوات التفويض في شبكة the Graph +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a delegator +- A Video guide showing the steps to delegate in the Graph Network UI -## مخاطر التفويض Delegation +## Delegation Risks -القائمة أدناه هي المخاطر الرئيسية لكونك مفوضا في البروتوكول. +Listed below are the main risks of being a delegator in the protocol. -### رسوم التفويض +### The delegation fee -من المهم أن تفهم أنه في كل مرة تقوم فيها بالتفويض ، سيتم حرق 0.5٪. هذا يعني أنه إذا كنت تفوض 1000 GRT ، فستحرق 5 GRT تلقائيا. +It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. -هذا يعني أنه لكي يكون المفوض Delegator آمنا ، يجب أن يحسب عائده من خلال التفويض delegating للمفهرس. على سبيل المثال ، قد يحسب المفوض عدد الأيام التي سيستغرقها قبل أن يسترد ضريبة الإيداع ال 0.5٪ التي دفعها للتفويض. +This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. -### فترة إلغاء التفويض +### The delegation unbonding period -عندما يرغب أحد المفوضين في إلغاء التفويض ، تخضع التوكن الخاصة به إلى فترة 28 يوما وذلك لإلغاء التفويض. هذا يعني أنه لا يمكنهم تحويل التوكن الخاصة بهم ، أو كسب أي مكافآت لمدة 28 يوما. +Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. -يجب اختيار المفهرس بحكمة. إذا اخترت مفهرسا ليس جديرا بالثقة ، أو لا يقوم بعمل جيد ، فستحتاج إلى إلغاء التفويض ، مما يعني أنك ستفقد الكثير من الفرص لكسب المكافآت والتي يمكن أن تكون سيئة مثل حرق GRT. +One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT.
- لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض. + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day + unbonding period._
-### اختيار مفهرس جدير بالثقة مع عائد جيد للمفوضين +### Choosing a trustworthy indexer with a fair reward payout for delegators -هذا جزء مهم عليك أن تفهمه. أولاً ، دعنا نناقش ثلاث قيم مهمة للغاية وهي بارامترات التفويض. +This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. -اقتطاع مكافأة الفهرسة Indexing Reward Cut - هو جزء من المكافآت التي سيحتفظ بها المفهرس لنفسه. هذا يعني أنه إذا تم تعيينه على 100٪ ، فستحصل كمفوض على 0 كمكافآت فهرسة. إذا رأيت 80٪ في واجهة المستخدم ، فهذا يعني أنك كمفوض ، ستتلقى 20٪. ملاحظة مهمة - في بداية الشبكة ، مكافآت الفهرسة تمثل غالبية المكافآت. +Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards.
- المفهرس الأعلى يمنح المفوضين 90٪ من المكافآت. والمتوسط يمنح المفوضين 20٪. الأدنى يعطي المفوضين ~ 83٪. + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The + middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
-- اقتطاع رسوم الاستعلام Query Fee Cut - هذا تماما مثل اقتطاع مكافأة الفهرسة Indexing Reward Cut. ومع ذلك ، فهو مخصص بشكل خاص للعائدات على رسوم الاستعلام التي يجمعها المفهرس. وتجدر الإشارة إلى أنه في بداية الشبكة ، سيكون العائد من رسوم الاستعلام صغيرا جدا مقارنة بمكافأة الفهرسة. من المستحسن الاهتمام بالشبكة لتحديد متى ستصبح رسوم الاستعلام في الشبكة أكثر أهمية. +- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. -كما ترى ، تحتاج للكثير من التفكير لاختيار المفهرس الصحيح. هذا السبب في أننا نوصي بشدة باستكشاف The Graph Discord لتحديد من هم المفهرسون الذين يتمتعون بأفضل سمعة اجتماعية وتقنية لمكافأة المفوضين على أساس ثابت. العديد من المفهرسين نشيطون جدا في Discord ، وسيسعدهم الرد على أسئلتك. يقوم العديد منهم بالفهرسة لعدة أشهر في testnet ، ويبذلون قصارى جهدهم لمساعدة المفوضين على كسب عائد جيد ، حيث يعمل ذلك على تحسين الشبكة ونجاحها. +As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. -### حساب العائد المتوقع للمفوضين delegators +### Calculating delegators expected return -يجب على المفوض النظر في الكثير من العوامل عند تحديد العائد. وهم +A Delegator has to consider a lot of factors when determining the return. These -- يمكن للمفوض إلقاء نظرة على قدرة المفهرسين على استخدام التوكن tokens المفوضة لهم. إذا لم يقم المفهرس بتخصيص جميع التوكن المتاحة ، فإنه لا يكسب أقصى ربح يمكن أن يحققه لنفسه أو للمفوضين. -- الآن في الشبكة ، يمكن للمفهرس اختيار إغلاق المخصصة وجمع المكافآت في أي وقت بين 1 و 28 يوما. لذلك من الممكن أن يكون لدى المفهرس الكثير من المكافآت التي لم يجمعها بعد ، وبالتالي ، فإن إجمالي مكافآته منخفضة. يجب أن يؤخذ هذا في الاعتبار في الأيام الأولى. +- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. -### النظر في اقتطاع رسوم الاستعلام query fee cut واقتطاع رسوم الفهرسة indexing fee cut +### Considering the query fee cut and indexing fee cut -كما هو موضح في الأقسام أعلاه ، يجب عليك اختيار مفهرس يتسم بالشفافية والصدق بشأن اقتطاع رسوم الاستعلام Query Fee Cut واقتطاع رسوم الفهرسة Indexing Fee Cuts. يجب على المفوض أيضا إلقاء نظرة على بارامتارات Cooldown time لمعرفة مقدار الوقت المتاح لديهم. بعد الانتهاء من ذلك ، من السهل إلى حد ما حساب مقدار المكافآت التي يحصل عليها المفوضون. 
الصيغة هي: +As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: -![قطع مكافأة الفهرسة Indexing Reward Cut](/img/Delegation-Reward-Formula.png) +![Delegation Image 3](/img/Delegation-Reward-Formula.png) -### النظر في أسهم تفويض المفهرسين +### Considering the indexers delegation pool -شيء آخر يجب على المفوضين مراعاته وهو نسبة أسهم التفويض Delegation Pool التي يمتلكونها. يتم تقاسم أسهم مكافآت التفويض بالتساوي ، مع إعادة موازنة بسيطة يتم تحديدها حسب المبلغ الذي أودعه المفوض. هذا يمنح المفوض حصة من الأسهم: +Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: -![شارك الصيغة](/img/Share-Forumla.png) +![Share formula](/img/Share-Forumla.png) Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. -لذلك يمكن للمفوض أن يقوم بالحسابات لتحديد أن المفهرس الذي يقدم 20٪ للمفوضين يقدم عائدا أفضل. +A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. -### النظر في سعة التفويض +### Considering the delegation capacity -شيء آخر للنظر هو سعة التفويض. حاليا نسبة التفويض تم تعيينه على 16. هذا يعني أنه إذا قام المفهرس بعمل staking ل 1،000،000 GRT ، فإن سعة التفويض الخاصة به هي 16،000،000 GRT من التوكن المفوضة التي يمكنهم استخدامها في البروتوكول. أي توكن مفوّضة تزيد عن هذا المبلغ ستخفف من جميع مكافآت المفوضين. +Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. -تخيل أن المفهرس لديه 100،000،000 GRT مفوضة ، وسعته هي فقط 16،000،000 GRT. هذا يعني أنه لا يتم استخدام 84.000.000 من توكنات GRT لكسب التوكنات. وجميع المفوضين والمفهرس يحصلون على مكافآت أقل مما يمكن أن يحصلوا عليه. +Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. -باستخدام هذه الصيغة ، يمكننا أن نرى أنه من الممكن فعليا للمفهرس الذي يقدم 20٪ فقط للمفوضين ، أن يمنح المفوضين مكافأة أفضل من المفهرس الذي يعطي 90٪ للمفوضين. +Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. -## فيديو لواجهة مستخدم الشبكة +## Video guide for the network UI -باستخدام هذه الصيغة ، يمكننا أن نرى أنه من الممكن فعليا للمفهرس الذي يقدم 20٪ فقط للمفوضين ، أن يمنح المفوضين مكافأة أفضل من المفهرس الذي يعطي 90٪ للمفوضين. +This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
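The delegation arithmetic described above (the 0.5% deposit tax, the Indexer's reward cut, and a delegator's proportional share of the delegation pool) is easy to check with a few lines of code. The sketch below is illustrative only: the tax rate and the meaning of the reward cut come from the text, while the pool size and period rewards are invented numbers, and the simple proportional-pool model is an approximation rather than the protocol's exact accounting.

```python
# Illustrative sketch of the delegation math described above.
# Assumptions: a flat 0.5% delegation tax, a simple proportional pool,
# and made-up reward and pool figures. Not protocol code.

DELEGATION_TAX = 0.005  # 0.5% of every delegation is burned

def effective_delegation(amount_grt: float) -> float:
    """GRT that actually enters the delegation pool after the tax."""
    return amount_grt * (1 - DELEGATION_TAX)

def delegator_rewards(period_rewards: float, indexer_reward_cut: float,
                      my_deposit: float, pool_total: float) -> float:
    """Rewards passed through to one delegator for a period.

    indexer_reward_cut is the fraction the Indexer keeps; a cut of 0.80
    shown in the UI means delegators share the remaining 20%.
    """
    to_delegators = period_rewards * (1 - indexer_reward_cut)
    return to_delegators * (my_deposit / pool_total)

deposit = effective_delegation(1_000)               # 1000 GRT delegated, 5 GRT burned
rewards = delegator_rewards(period_rewards=10_000,  # hypothetical rewards for the pool
                            indexer_reward_cut=0.80,
                            my_deposit=deposit,
                            pool_total=2_000_000)   # hypothetical pool size
print(f"Effective delegation: {deposit:.2f} GRT")
print(f"Rewards this period:  {rewards:.4f} GRT")
```

A calculation like this also makes the break-even point explicit: delegating only pays off once the accumulated rewards exceed the 5 GRT burned up front.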
From 19bfcf5a46671ddaf0cbefc489608849a82ebaf5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:51 -0500 Subject: [PATCH 111/241] New translations delegating.mdx (Korean) --- pages/ko/delegating.mdx | 73 +++++++++++++++++++++-------------------- 1 file changed, 37 insertions(+), 36 deletions(-) diff --git a/pages/ko/delegating.mdx b/pages/ko/delegating.mdx index 20cd2496fa04..49c14cd8e249 100644 --- a/pages/ko/delegating.mdx +++ b/pages/ko/delegating.mdx @@ -4,89 +4,90 @@ title: 위임하기 Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## 위임자 가이드 +## Delegator Guide This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- 더그래프 네트워크에 토큰을 위임할 때의 위험요소 -- 위임자로서의 예상 수익을 계산하는 방법 -- 더그래프 네트워크 UI에서 위임하는 절차를 보여주는 비디오 가이드 +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a delegator +- A Video guide showing the steps to delegate in the Graph Network UI -## 위임 위험요소 +## Delegation Risks -아래의 리스트들은 프로토콜에서 위임자가 될 때의 주된 위험요소들입니다. +Listed below are the main risks of being a delegator in the protocol. -### 위임 수수료 +### The delegation fee -여러분들이 위임 행위를 할 때마다 0.5%의 요금이 부과된다는 점을 이해하는 것이 중요합니다. 이는 1000 GRT를 위임하는 경우 여러분들은 5 GRT를 자동적으로 소각하게 된다는 것을 뜻합니다. +It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. -즉, 안전을 위해서 위임자는 인덱서에 위임을 행함으로써 얻게될 수익을 계산해야 한다는 뜻입니다. 예를 들어, Delegator는 해당 위임에 대해 0.5%의 보증세를 다시 벌어들이기까지 며칠이 걸릴지 계산을 해야합니다. +This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. -### 위임 해지 기간 +### The delegation unbonding period -위임자가 위임의 해지를 원할 경우, 28일의 토큰 위임 해지 기간이 적용됩니다. 이는 그들이 28일 동안 토큰을 이전할 수 없고, 보상 또한 수령하지 못한다는 것을 의미합니다. +Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. -또한 고려해야 할 한 가지는 위임을 위한 인덱서를 현명하게 선택하는 것입니다. 만약 여러분들이 신뢰할 수 없거나 작업을 제대로 수행하지 않는 인덱서를 선택하면 여러분들은 해당 위임의 취소를 원할 것입니다. 이 경우, 보상을 받는 기회를 잃음과 더불어, 단지 여러분의 GRT를 소각하기만 한 결과를 초래할 것입니다. +One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT.
- 위임 UI에는 0.5%의 수수료 및 28일의 위임 해지 기간이 명시되어있습니다. + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day + unbonding period._
-### 위임자들에 대한 공정한 보상 지급 규칙을 지닌 신뢰할 수 있는 인덱서 선택 +### Choosing a trustworthy indexer with a fair reward payout for delegators -이것은 이해해야 하는 중요한 부분입니다. 먼저 위임 매개 변수라는 세 가지 매우 중요한 값에 대해 살펴보겠습니다. +This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. -Indexing Reward Cut – Indexing Reward Cut은 인덱서가 스스로 가져갈 보상의 비율입니다. 즉, 100%로 설정된 경우 위임자에게 주어지는 인덱싱 보상이 0이 됩니다. 만약 UI에 80%로 표시되어 있다면, 이는 여러분은 위임자로서 20%를 받게 된다는 것을 의미합니다. 중요 참고 사항 - 네트워크 시작 부분의 인덱싱 보상이 보상의 대부분을 차지합니다. +Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards.
- 맨 위에 위치하는 인덱서는 위임자들에게 보상의 90%를 지급합니다. 가운데 있는 인덱서는 위임자들에게 보상의 20%를 - 지급합니다. 제일 하단의 인덱서는 위임자들에게 보상액의 83% 상당을 지급하는 인덱서입니다. + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The + middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
-- Query Fee Cut - 이는 Indexing Reward Cut과 동일하게 작동합니다. 그러나 이 값은 특별히 인덱서가 수집하는 쿼리 수수료의 반환에 사용됩니다. 네트워크 시작 시에 쿼리 수수료 수익은 인덱싱 보상에 비해 매우 적다는 점에 유의해야 합니다. 네트워크에서 쿼리 수수료가 더 중요해지기 시작할 시기를 결정하기 위해 네트워크에 주의를 기울이는 것이 좋습니다. +- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. -보시다시피 올바른 인덱서를 선택해야 하는 여러가지 고려사항이 존재합니다. 이러한 이유로 저희는 여러분들이 더그래프 디스코드 채널을 살펴보시고, 사회적 평판 및 기술적 평판을 잘 갖추고, 일관성을 기반으로 위임자들에게 보상을 지급하는 인덱서가 누구인지 확인하시기를 강력히 추천드립니다. 대부분의 인덱서는 디스코드에서 매우 활발히 활동중이며, 여러분들의 질문에 기꺼이 대답할 것입니다. 이들 중 다수는 테스트넷에서 몇 개월 동안 인덱싱 작업을 수행했으며, 네트워크의 건강과 성공을 향상시켜 위임자가 좋은 수익을 얻을 수 있도록 최선을 다하고 있습니다. +As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. -### 위임자들의 예상 수익 계산 +### Calculating delegators expected return -위임자는 수익을 결정할 때 수많은 요소를 고려해야 합니다. These +A Delegator has to consider a lot of factors when determining the return. These -- 기술적인 위임자들은 해당 인덱서가 그들에게 위임되어 사용 가능한 토큰을 올바르게 사용할 수 있는 능력을 갖추었는지를 볼 수 있습니다. 만약 인덱서들이 그들이 위임할 수 있는 모든 토큰을 할당하지 않는다면, 그들 자신 및 위임자들을 위한 최대 수익을 창출할 수 없습니다. -- 현재 네트워크에서 인덱서는 보상들을 수집하고, 할당을 닫는 기간을 1일에서 28일 사이의 기간으로 언제든지 선택할 수 있습니다 따라서 어떤 인덱서는 아직 수집하지 않은 보상이 많을 수도 있으며, 이로인해 그들의 총 보상이 낮을 수 있습니다. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. 이는 초기 며칠 동안에는 반드시 고려해야할 사항입니다. +- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. -### Query fee cut 및 Indexing fee cut에 대한 고려 +### Considering the query fee cut and indexing fee cut -위의 섹션에서 설명한 대로, 여러분들은 Query Fee Cut 및 Indexing Fee Cuts에 대해 투명하고 정직한 인덱서들을 선택해야합니다. 또한 위임자는 Parameters Cooldown 시간을 확인하여, 그들의 쿨다운 시간으로 인해 얼마나 많은 지연 시간이 존재하는지 확인해야합니다. 그렇게 한 후, 위임자들은 매우 쉽게 수령 리워드 총액을 계산 할 수 있습니다. 공식은 다음과 같습니다: +As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. 
The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) -### 위임자들의 위임 풀에 대한 고려 +### Considering the indexers delegation pool -위임자들이 고려해야 할 또 다른 사항은 그들의 소유하고 있는 위임 풀의 비율입니다. 모든 위임 보상은 균등하게 공유되며, 단순하게 위임자가 풀에 입금한 양으로 풀의 균형을 재조정합니다. 다음과 같이 위임자에게 풀의 지분이 주어집니다. +Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: ![Share formula](/img/Share-Forumla.png) -따라서 위임자는 이러한 계산을 통해 위임자에게 20%를 제공하는 해당 인덱서가 더 나은 보상을 제공한다는 것을 결정할 수 있습니다. +Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. -### 위임 수용력에 대한 고려 +### Considering the delegation capacity -또 다른 고려사항은 위임 수용력입니다. 현재 위임 비율은 16으로 설정되어 있습니다. 만약 어떠한 인덱서가 1,000,000 GRT를 스테이킹 한 경우 프로토콜에서 그들이 사용할 수 있는 위임 토큰의 위임 수용력 수량은 16,000,000GRT입니다. 이 금액 이상의 위임된 토큰은 모든 위임자의 보상을 희석시킵니다. +Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. -만약 어떠한 인덱서에 위임된 GRT가 100,000,000개이고 수용력은 16,000,000 GRT에 불과하다고 가정해 보십시오. 이는 사실상 84,000,000개의 GRT 토큰이 실제로 토큰을 얻기 위해 사용되지 않고 있음을 의미합니다. 그리고 모든 위임자들과 인덱서는 실제 그들이 받을 수 있는 보상 보다 훨씬 적은 보상을 받고 있는 것입니다. +Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. -이 공식을 사용하여, 우리는 실제로 위임자에게 20%만 제공하는 인덱서가 실제로 위임자에게 90%를 주는 인덱서보다 훨씬 더 나은 보상을 제공하는 것이 가능하다는 것을 알 수 있습니다. +Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. -## 네트워크 UI를 위한 비디오 가이드 +## Video guide for the network UI -이 공식을 사용하여, 우리는 실제로 위임자에게 20%만 제공하는 인덱서가 실제로 위임자에게 90%를 주는 인덱서보다 훨씬 더 나은 보상을 제공하는 것이 가능하다는 것을 알 수 있습니다. +This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
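The point that a 20% share can beat a 90% share becomes clearer with a direct comparison of per-GRT returns. The figures below are invented for illustration; actual returns depend on how much of the stake is allocated, which subgraphs are indexed, and network issuance.

```python
# Hypothetical comparison of two Indexers using the share formula above.
# All numbers are invented for illustration.

def per_grt_return(period_rewards: float, cut_kept_by_indexer: float,
                   delegation_pool: float) -> float:
    """Delegator reward per delegated GRT for one period."""
    to_delegators = period_rewards * (1 - cut_kept_by_indexer)
    return to_delegators / delegation_pool

# Indexer A passes only 20% to delegators but has a small delegation pool.
a = per_grt_return(period_rewards=50_000, cut_kept_by_indexer=0.80,
                   delegation_pool=1_000_000)

# Indexer B passes 90% to delegators but its pool is twenty times larger.
b = per_grt_return(period_rewards=50_000, cut_kept_by_indexer=0.10,
                   delegation_pool=20_000_000)

print(f"Indexer A: {a:.6f} GRT per delegated GRT")  # 0.010000
print(f"Indexer B: {b:.6f} GRT per delegated GRT")  # 0.002250
```

Under these assumed numbers, the Indexer keeping 80% still pays each delegated GRT more than four times what the Indexer keeping only 10% does, because its delegation pool is so much smaller.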
From d2b8a5534521c73982ab63b1a6bf0d993834e29d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:52 -0500 Subject: [PATCH 112/241] New translations explorer.mdx (Vietnamese) --- pages/vi/explorer.mdx | 209 +++++++++++++++++++++--------------------- 1 file changed, 104 insertions(+), 105 deletions(-) diff --git a/pages/vi/explorer.mdx b/pages/vi/explorer.mdx index f66163c2def8..c8df28cfe03f 100644 --- a/pages/vi/explorer.mdx +++ b/pages/vi/explorer.mdx @@ -2,211 +2,210 @@ title: The Graph Explorer --- -Chào mừng bạn đến với Graph Explorer, hay như chúng tôi thường gọi, cổng thông tin phi tập trung của bạn vào thế giới subgraphs và dữ liệu mạng. 👩🏽‍🚀 Graph Explorer bao gồm nhiều phần để bạn có thể tương tác với các nhà phát triển subgraph khác, nhà phát triển dapp, Curators, Indexers, và Delegators. Để biết tổng quan chung về Graph Explorer, hãy xem video bên dưới (hoặc tiếp tục đọc bên dưới): +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below):
+>
## Subgraphs -Điều đầu tiên, nếu bạn vừa hoàn thành việc triển khai và xuất bản subgraph của mình trong Subgraph Studio, thì tab Subgraphs ở trên cùng của thanh điều hướng là nơi để xem các subgraph đã hoàn thành của riêng bạn (và các subgraph của những người khác) trên mạng phi tập trung. Tại đây, bạn sẽ có thể tìm thấy chính xác subgraph mà bạn đang tìm kiếm dựa trên ngày tạo, lượng tín hiệu hoặc tên. +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. -![Explorer Image 1 -](/img/Subgraphs-Explorer-Landing.png) +![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -Khi bạn nhấp vào một subgraph, bạn sẽ có thể thử các truy vấn trong playground và có thể tận dụng chi tiết mạng để đưa ra quyết định sáng suốt. Bạn cũng sẽ có thể báo hiệu GRT trên subgraph của riêng bạn hoặc các subgraph của người khác để làm cho các indexer nhận thức được tầm quan trọng và chất lượng của nó. Điều này rất quan trọng vì việc báo hiệu trên một subgraph khuyến khích nó được lập chỉ mục, có nghĩa là nó sẽ xuất hiện trên mạng để cuối cùng phục vụ các truy vấn. +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -Trên trang chuyên dụng của mỗi subgraph, một số chi tiết được hiển thị. Bao gồm: +On each subgraph’s dedicated page, several details are surfaced. These include: -- Báo hiệu / Hủy báo hiệu trên subgraph -- Xem thêm chi tiết như biểu đồ, ID triển khai hiện tại và siêu dữ liệu khác -- Chuyển đổi giữa các phiên bản để khám phá các lần bản trước đây của subgraph -- Truy vấn subgraph qua GraphQL -- Thử subgraph trong playground -- Xem các Indexers đang lập chỉ mục trên một subgraph nhất định -- Thống kê Subgraph (phân bổ, Curators, v.v.) -- Xem pháp nhân đã xuất bản subgraph +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## Những người tham gia +## Participants -Trong tab này, bạn sẽ có được cái nhìn tổng thể về tất cả những người đang tham gia vào các hoạt động mạng, chẳng hạn như Indexers, Delegators, và Curators. Dưới đây, chúng tôi sẽ đi vào đánh giá sâu về ý nghĩa của mỗi tab đối với bạn. +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -Hãy bắt đầu với Indexers (Người lập chỉ mục). 
Các Indexers là xương sống của giao thức, là những người đóng góp vào các subgraph, lập chỉ mục chúng và phục vụ các truy vấn cho bất kỳ ai sử dụng subgraph. Trong bảng Indexers, bạn sẽ có thể thấy các thông số ủy quyền của Indexer, lượng stake của họ, số lượng họ đã stake cho mỗi subgraph và doanh thu mà họ đã kiếm được từ phí truy vấn và phần thưởng indexing. Đi sâu hơn: +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: -- Phần Cắt Phí Truy vấn - là % hoàn phí truy vấn mà Indexer giữ lại khi ăn chia với Delegators -- Phần Cắt Thưởng Hiệu quả - phần thưởng indexing được áp dụng cho nhóm ủy quyền (delegation pool). Nếu là âm, điều đó có nghĩa là Indexer đang cho đi một phần phần thưởng của họ. Nếu là dương, điều đó có nghĩa là Indexer đang giữ lại một số phần thưởng của họ -- Cooldown Remaining (Thời gian chờ còn lại) - thời gian còn lại cho đến khi Indexer có thể thay đổi các thông số ủy quyền ở trên. Thời gian chờ Cooldown được Indexers thiết lập khi họ cập nhật thông số ủy quyền của mình -- Được sở hữu - Đây là tiền stake Indexer đã nạp vào, có thể bị phạt cắt giảm (slashed) nếu có hành vi độc hại hoặc không chính xác -- Được ủy quyền - Lượng stake từ các Delegator có thể được Indexer phân bổ, nhưng không thể bị phạt cắt giảm -- Được phân bổ - phần stake mà Indexers đang tích cực phân bổ cho các subgraph mà họ đang lập chỉ mục -- Năng lực Ủy quyền khả dụng - số token stake được ủy quyền mà Indexers vẫn có thể nhận được trước khi họ trở nên ủy quyền quá mức (overdelegated) -- Max Delegation Capacity (Năng lực Ủy quyền Tối đa) - số tiền token stake được ủy quyền tối đa mà Indexer có thể chấp nhận một cách hiệu quả. Số tiền stake được ủy quyền vượt quá con số này sẽ không thể được sử dụng để phân bổ hoặc tính toán phần thưởng. -- Phí Truy vấn - đây là tổng số phí mà người dùng cuối đã trả cho các truy vấn từ Indexer đến hiện tại -- Thưởng Indexer - đây là tổng phần thưởng indexer mà Indexer và các Delegator của họ kiếm được cho đến hiện tại. Phần thưởng Indexer được trả thông qua việc phát hành GRT. +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. 
+- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers có thể kiếm được cả phí truy vấn và phần thưởng indexing. Về mặt chức năng, điều này xảy ra khi những người tham gia mạng ủy quyền GRT cho Indexer. Điều này cho phép Indexers nhận phí truy vấn và phần thưởng tùy thuộc vào thông số Indexer của họ. Các thông số Indexing được cài đặt bằng cách nhấp vào phía bên phải của bảng hoặc bằng cách truy cập hồ sơ của Indexer và nhấp vào nút “Ủy quyền”. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. -Để tìm hiểu thêm về cách trở thành một Indexer, bạn có thể xem qua [tài liệu chính thức](/indexing) hoặc [Hướng dẫn về Indexer của Học viện The Graph.](https://thegraph.academy/delegators/choosing-indexers/) +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) ![Indexing details pane](/img/Indexing-Details-Pane.png) ### 2. Curators -Curators (Người Giám tuyển) phân tích các subgraph để xác định subgraph nào có chất lượng cao nhất. Một khi Curator tìm thấy một subgraph có khả năng hấp dẫn, họ có thể curate nó bằng cách báo hiệu trên đường cong liên kết (bonding curve) của nó. Khi làm như vậy, Curator sẽ cho Indexer biết những subgraph nào có chất lượng cao và nên được lập chỉ mục. +Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. -Curators có thể là các thành viên cộng đồng, người tiêu dùng dữ liệu hoặc thậm chí là nhà phát triển subgraph, những người báo hiệu trên subgraph của chính họ bằng cách nạp token GRT vào một đường cong liên kết. Bằng cách nạp GRT, Curator đúc ra cổ phần curation của một subgraph. Kết quả là, Curators có đủ điều kiện để kiếm một phần phí truy vấn mà subgraph mà họ đã báo hiệu tạo ra. Đường cong liên kết khuyến khích Curators quản lý các nguồn dữ liệu chất lượng cao nhất. Bảng Curator trong phần này sẽ cho phép bạn xem: +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. 
The Curator table in this section will allow you to see: -- Ngày Curator bắt đầu curate -- Số GRT đã được nạp -- Số cổ phần một Curator sở hữu +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns ![Explorer Image 6](/img/Curation-Overview.png) -Nếu muốn tìm hiểu thêm về vai trò Curator, bạn có thể thực hiện việc này bằng cách truy cập các liên kết sau của [Học viện The Graph](https://thegraph.academy/curators/) hoặc [tài liệu chính thức.](/curating) +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) ### 3. Delegators -Delegators (Người Ủy quyền) đóng một vai trò quan trọng trong việc duy trì tính bảo mật và phân quyền của Mạng The Graph. Họ tham gia vào mạng bằng cách ủy quyền (tức là "staking") token GRT cho một hoặc nhiều indexer. Không có những Delegator, các Indexer ít có khả năng kiếm được phần thưởng và phí đáng kể. Do đó, Indexer tìm cách thu hút Delegator bằng cách cung cấp cho họ một phần của phần thưởng lập chỉ mục và phí truy vấn mà họ kiếm được. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. -Delegator, đổi lại, chọn Indexer dựa trên một số biến số khác nhau, chẳng hạn như hiệu suất trong quá khứ, tỷ lệ phần thưởng lập chỉ mục và phần cắt phí truy vấn. Danh tiếng trong cộng đồng cũng có thể đóng vai trò quan trọng trong việc này! Bạn nên kết nối với những các indexer đã chọn qua[Discord của The Graph](https://thegraph.com/discord) hoặc [Forum The Graph](https://forum.thegraph.com/)! +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -Bảng Delegators sẽ cho phép bạn xem các Delegator đang hoạt động trong cộng đồng, cũng như các chỉ số như: +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: -- Số lượng Indexers mà một Delegator đang ủy quyền cho -- Ủy quyền ban đầu của Delegator -- Phần thưởng họ đã tích lũy nhưng chưa rút khỏi giao thức -- Phần thưởng đã ghi nhận ra mà họ rút khỏi giao thức -- Tổng lượng GRT mà họ hiện có trong giao thức -- Ngày họ ủy quyền lần cuối cùng +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at -Nếu bạn muốn tìm hiểu thêm về cách trở thành một Delegator, đừng tìm đâu xa! Tất cả những gì bạn phải làm là đi đến [tài liệu chính thức](/delegating) hoặc [Học viện The Graph](https://docs.thegraph.academy/network/delegators). 
+If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## Mạng lưới +## Network -Trong phần Mạng lưới, bạn sẽ thấy các KPI toàn cầu cũng như khả năng chuyển sang cơ sở từng epoch và phân tích các chỉ số mạng chi tiết hơn. Những chi tiết này sẽ cho bạn biết mạng hoạt động như thế nào theo thời gian. +In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### Hoạt động +### Activity -Phần hoạt động có tất cả các chỉ số mạng hiện tại cũng như một số chỉ số tích lũy theo thời gian. Ở đây bạn có thể thấy những thứ như: +The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: -- Tổng stake mạng hiện tại -- Phần chia stake giữa Indexer và các Delegator của họ -- Tổng cung GRT, lượng được đúc và đốt kể từ khi mạng lưới thành lập -- Tổng phần thưởng Indexing kể từ khi bắt đầu giao thức -- Các thông số giao thức như phần thưởng curation, tỷ lệ lạm phát,... -- Phần thưởng và phí của epoch hiện tại +- The current total network stake +- The stake split between the Indexers and their Delegators +- Total supply, minted, and burned GRT since the network inception +- Total Indexing rewards since the inception of the protocol +- Protocol parameters such as curation reward, inflation rate, and more +- Current epoch rewards and fees -Một vài chi tiết quan trọng đáng được đề cập: +A few key details that are worth mentioning: -- **Phí truy vấn đại diện cho phí do người tiêu dùng tạo ra**, và chúng có thể được Indexer yêu cầu (hoặc không) sau một khoảng thời gian ít nhất 7 epochs (xem bên dưới) sau khi việc phân bổ của họ cho các subgraph đã được đóng lại và dữ liệu mà chúng cung cấp đã được người tiêu dùng xác thực. -- **Phần thưởng Indexing đại diện cho số phần thưởng mà Indexer đã yêu cầu được từ việc phát hành mạng trong epoch đó.** Mặc dù việc phát hành giao thức đã được cố định, nhưng phần thưởng chỉ nhận được sau khi Indexer đóng phân bổ của họ cho các subgraph mà họ đã lập chỉ mục. Do đó, số lượng phần thưởng theo từng epoch khác nhau (nghĩa là trong một số epoch, Indexer có thể đã đóng chung các phân bổ đã mở trong nhiều ngày). +- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). 
![Explorer Image 8](/img/Network-Stats.png) ### Epochs -Trong phần Epochs, bạn có thể phân tích trên cơ sở từng epoch, các chỉ số như: +In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- Khối bắt đầu hoặc kết thúc của Epoch -- Phí truy vấn được tạo và phần thưởng indexing được thu thập trong một epoch cụ thể -- Trạng thái Epoch, đề cập đến việc thu và phân phối phí truy vấn và có thể có các trạng thái khác nhau: - - Epoch đang hoạt động là epoch mà Indexer hiện đang phân bổ cổ phần và thu phí truy vấn - - Epoch đang giải quyết là những epoch mà các kênh trạng thái đang được giải quyết. Điều này có nghĩa là Indexers có thể bị phạt cắt giảm nếu người tiêu dùng công khai tranh chấp chống lại họ. - - Epoch đang phân phối là epoch trong đó các kênh trạng thái cho các epoch đang được giải quyết và Indexer có thể yêu cầu hoàn phí truy vấn của họ. - - Epoch được hoàn tất là những epoch không còn khoản hoàn phí truy vấn nào để Indexer yêu cầu, do đó sẽ được hoàn thiện. +- Epoch start or end block +- Query fees generated and indexing rewards collected during a specific epoch +- Epoch status, which refers to the query fee collection and distribution and can have different states: + - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees + - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. + - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. ![Explorer Image 9](/img/Epoch-Stats.png) -## Hồ sơ Người dùng của bạn +## Your User Profile -Nãy giờ chúng ta đã nói về các thống kê mạng, hãy chuyển sang hồ sơ cá nhân của bạn. Hồ sơ người dùng cá nhân của bạn là nơi để bạn xem hoạt động mạng của mình, bất kể bạn đang tham gia mạng như thế nào. Ví Ethereum của bạn sẽ hoạt động như hồ sơ người dùng của bạn và với Trang Tổng quan Người dùng, bạn sẽ có thể thấy: +Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: -### Tổng quan Hồ sơ +### Profile Overview -Đây là nơi bạn có thể xem bất kỳ hành động hiện tại nào bạn đã thực hiện. Đây cũng là nơi bạn có thể tìm thấy thông tin hồ sơ, mô tả và trang web của mình (nếu bạn đã thêm). +This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) -### Tab Subgraphs +### Subgraphs Tab -Nếu bạn nhấp vào tab Subgraphs, bạn sẽ thấy các subgraph đã xuất bản của mình. Điều này sẽ không bao gồm bất kỳ subgraph nào được triển khai với CLI cho mục đích thử nghiệm - các subgraph sẽ chỉ hiển thị khi chúng được xuất bản lên mạng phi tập trung. +If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. 
![Explorer Image 11](/img/Subgraphs-Overview.png) -### Tab Indexing +### Indexing Tab -Nếu bạn nhấp vào tab Indexing, bạn sẽ tìm thấy một bảng với tất cả các phân bổ hiện hoạt và lịch sử cho các subgraph, cũng như các biểu đồ mà bạn có thể phân tích và xem hiệu suất trước đây của mình với tư cách là Indexer. +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. -Phần này cũng sẽ bao gồm thông tin chi tiết về phần thưởng Indexer ròng của bạn và phí truy vấn ròng. Bạn sẽ thấy các số liệu sau: +This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: -- Stake được ủy quyền - phần stake từ Delegator có thể được bạn phân bổ nhưng không thể bị phạt cắt giảm (slashed) -- Tổng Phí Truy vấn - tổng phí mà người dùng đã trả cho các truy vấn do bạn phục vụ theo thời gian -- Phần thưởng Indexer - tổng số phần thưởng Indexer bạn đã nhận được, tính bằng GRT -- Phần Cắt Phí - lượng % hoàn phí phí truy vấn mà bạn sẽ giữ lại khi ăn chia với Delegator -- Phần Cắt Thưởng - lượng % phần thưởng Indexer mà bạn sẽ giữ lại khi ăn chia với Delegator -- Được sở hữu - số stake đã nạp của bạn, có thể bị phạt cắt giảm (slashed) vì hành vi độc hại hoặc không chính xác +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior ![Explorer Image 12](/img/Indexer-Stats.png) -### Tab Delegating +### Delegating Tab -Delegator rất quan trọng đối với Mạng The Graph. Một Delegator phải sử dụng kiến thức của họ để chọn một Indexer sẽ mang lại lợi nhuận lành mạnh từ các phần thưởng. Tại đây, bạn có thể tìm thấy thông tin chi tiết về các ủy quyền đang hoạt động và trong lịch sử của mình, cùng với các chỉ số của Indexer mà bạn đã ủy quyền. +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. -Trong nửa đầu của trang, bạn có thể thấy biểu đồ ủy quyền của mình, cũng như biểu đồ chỉ có phần thưởng. Ở bên trái, bạn có thể thấy các KPI phản ánh các chỉ số ủy quyền hiện tại của bạn. +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. -Các chỉ số Delegator mà bạn sẽ thấy ở đây trong tab này bao gồm: +The Delegator metrics you’ll see here in this tab include: -- Tổng pphần thưởng ủy quyền -- Tổng số phần thưởng chưa ghi nhận -- Tổng số phần thưởng đã ghi được +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards -Trong nửa sau của trang, bạn có bảng ủy quyền. Tại đây, bạn có thể thấy các Indexer mà bạn đã ủy quyền, cũng như thông tin chi tiết của chúng (chẳng hạn như phần cắt thưởng, thời gian chờ, v.v.). 
+In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). -Với các nút ở bên phải của bảng, bạn có thể quản lý ủy quyền của mình - ủy quyền nhiều hơn, hủy bỏ hoặc rút lại ủy quyền của bạn sau khoảng thời gian rã đông (thawing period). +With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -Lưu ý rằng biểu đồ này có thể cuộn theo chiều ngang, vì vậy nếu bạn cuộn hết cỡ sang bên phải, bạn cũng có thể thấy trạng thái ủy quyền của mình (ủy quyền, hủy ủy quyền, có thể rút lại). +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). ![Explorer Image 13](/img/Delegation-Stats.png) -### Tab Curating +### Curating Tab -Trong tab Curation, bạn sẽ tìm thấy tất cả các subgraph mà bạn đang báo hiệu (do đó cho phép bạn nhận phí truy vấn). Báo hiệu cho phép Curator đánh dấu cho Indexer biết những subgraph nào có giá trị và đáng tin cậy, do đó báo hiệu rằng chúng cần được lập chỉ mục. +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. -Trong tab này, bạn sẽ tìm thấy tổng quan về: +Within this tab, you’ll find an overview of: -- Tất cả các subgraph bạn đang quản lý với các chi tiết về tín hiệu -- Tổng cổ phần trên mỗi subgraph -- Phần thưởng truy vấn cho mỗi subgraph -- Chi tiết ngày được cập nhật +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph +- Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) -## Cài đặt Hồ sơ của bạn +## Your Profile Settings -Trong hồ sơ người dùng của mình, bạn sẽ có thể quản lý chi tiết hồ sơ cá nhân của mình (như thiết lập tên ENS). Nếu bạn là Indexer, bạn thậm chí có nhiều quyền truy cập hơn vào các cài đặt trong tầm tay của mình. Trong hồ sơ người dùng của mình, bạn sẽ có thể thiết lập các tham số ủy quyền và operator của mình. +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. -- Operators (Người vận hành) thực hiện các hành động được hạn chế trong giao thức thay mặt cho Indexer, chẳng hạn như mở và đóng phân bổ. Operators thường là các địa chỉ Ethereum khác, tách biệt với ví đặt staking của họ, với quyền truy cập được kiểm soát vào mạng mà Indexer có thể cài đặt cá nhân -- Tham số ủy quyền cho phép bạn kiểm soát việc phân phối GRT giữa bạn và các Delegator của bạn. +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. 
![Explorer Image 15](/img/Profile-Settings.png) -Là cổng thông tin chính thức của bạn vào thế giới dữ liệu phi tập trung, Graph Explorer cho phép bạn thực hiện nhiều hành động khác nhau, bất kể vai trò của bạn trong mạng. Bạn có thể truy cập cài đặt hồ sơ của mình bằng cách mở menu thả xuống bên cạnh địa chỉ của bạn, sau đó nhấp vào nút Cài đặt. +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.
![Wallet details](/img/Wallet-Details.png)
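Subgraphs listed in the Explorer are queried with ordinary GraphQL over HTTP, which is what the playground does interactively. The snippet below is a minimal sketch of doing the same thing from a script; the endpoint URL and the `exampleEntities` entity and its fields are placeholders, since every subgraph defines its own schema and query address.

```python
# Minimal sketch of querying a subgraph endpoint with GraphQL.
# The URL, entity name, and fields below are placeholders.
import requests

SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/<org>/<subgraph>"  # placeholder

QUERY = """
{
  exampleEntities(first: 5, orderBy: id) {
    id
    owner
  }
}
"""

response = requests.post(SUBGRAPH_URL, json={"query": QUERY}, timeout=30)
response.raise_for_status()
payload = response.json()

if "errors" in payload:
    raise RuntimeError(payload["errors"])

for entity in payload["data"]["exampleEntities"]:
    print(entity["id"], entity["owner"])
```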
From c4f1bbc737f7241ade0b6f374593db6572c41262 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:53 -0500 Subject: [PATCH 113/241] New translations delegating.mdx (Chinese Simplified) --- pages/zh/delegating.mdx | 76 +++++++++++++++++++++-------------------- 1 file changed, 39 insertions(+), 37 deletions(-) diff --git a/pages/zh/delegating.mdx b/pages/zh/delegating.mdx index 8ba0e39c9035..217c80e3f9ff 100644 --- a/pages/zh/delegating.mdx +++ b/pages/zh/delegating.mdx @@ -2,84 +2,86 @@ title: 委托 --- -委托人不能因为不良行为而被取消,但对委托有存款税,以抑制可能损害网络完整性的不良决策。 +Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## 委托人指南 +## Delegator Guide -本指南将解释如何在Graph网络中成为一个有效的委托人。 委托人与所有索引人一起分享其委托股权的协议收益。 委托人必须根据多种因素,运用他们的最佳判断力来选择索引人。 请注意,本指南将不涉及正确设置Metamask等步骤,因为这些信息在互联网上广泛存在。 本指南有三个部分: +This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- 在 The Graph 网络中委托代币的风险 -- 如何计算作为委托人的预期回报 -- 展示在 The Graph 网络界面中进行委托步骤的视频指南 +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a delegator +- A Video guide showing the steps to delegate in the Graph Network UI -## 委托风险 +## Delegation Risks -下面列出了作为议定书中的委托人的主要风险。 +Listed below are the main risks of being a delegator in the protocol. -### 委托费用 +### The delegation fee -重要的是要了解每次委托时,您将被收取 0.5% 的费用。 这意味着如果您委托 1000 GRT,您将自动销毁 5 GRT。 +It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. -这意味着为了安全起见,委托人应该通过委托给索引人来计算他们的回报。 例如,委托人可能会计算他们需要多少天才能收回其委托的 0.5% 存款税。 +This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. -### 委托解约期 +### The delegation unbonding period -每当委托人想要解除委托时,他们的代币都有 28 天的解除绑定期。 这意味着他们在 28 天内不能转移他们的代币,也不能获得任何奖励。 +Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. -还需要考虑的一件事是明智地选择索引人。 如果您选择了一个不值得信赖的 索引人,或者没有做好工作,您将想要取消委托,这意味着您将失去很多获得奖励的机会,这可能与燃烧 GRT 一样糟糕。 +One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT.
- 请注意委托用户界面中的0.5%费用,以及28天的解约期。 + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day + unbonding period._
-### 选择一个为委托人提供公平的奖励分配的值得信赖的索引人 +### Choosing a trustworthy indexer with a fair reward payout for delegators -这是需要理解的重要部分。 首先让我们讨论三个非常重要的值,即委托参数。 +This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. -索引奖励分成- 索引奖励分成是指索引人将为自己保留的那部分奖励。 这意味着,如果它被设置为 100%,作为一个委托人,你将获得 0 个索引奖励。 如果你在 UI 中看到 80%,这意味着作为委托人,你将获得 20%。 一个重要的说明 -在网络的初期,索引奖励将占奖励的大部分比重。 +Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards.
- 上面的索引人分给委托人 90% 的收益。 中间的给委托人 20%。 下面的给委托人约 83%。 + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The
+ middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
-- 查询费分成-这与索引奖励分成的运作方式完全相同。 不过,这是专门针对索引人收取的查询费的回报。 需要注意的是,在网络初期,查询费的回报与索引奖励相比会非常小。 建议关注网络来确定网络中的查询费何时开始变的比较可观。 +- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. -正如您所看到的,在选择合适的索引人时必须要考虑很多。 这就是为什么我们强烈建议您探索 The Graph Discord,以确定哪些是具有最佳社会声誉和技术声誉的索引人,并以持续的方式奖励委托人。 许多索引人在 Discord 中非常活跃,他们将很乐意回答您的问题。 他们中的许多人已经在测试网中做了几个月的索引人,并且正在尽最大努力帮助委托人们赚取良好的回报,因为如此可以增进网络的健康运行和成功。 +As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. -### 计算委托人的预期收益 +### Calculating delegators expected return -委托人在确定收益时必须考虑很多因素。 这些因素解释如下 : +A Delegator has to consider a lot of factors when determining the return. These -- 有技术的委托人还可以查看索引人使用他们可用的委托代币的能力。 如果索引人没有分配所有可用的代币,他们就不会为自己或他们的委托人赚取最大利润。 -- 现在在网络中,索引人可以选择关闭分配并在 1 到 28 天之间的任何时间收集奖励。 因此,索引人可能有很多尚未收集的奖励,因此他们的总奖励很低。 早期应该考虑到这一点。 +- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. -### 考虑到查询费用的分成和索引费用的分成 +### Considering the query fee cut and indexing fee cut -如上文所述,你应该选择一个在设置他们的查询费分成和索引奖励分成方面透明和诚实的索引人。 委托人还应该看一下参数冷却时间,看看他们有多少时间缓冲区。 做完这些之后,计算委托人会获得的奖励金额就相当简单了。 计算公式是: +As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) -### 考虑索引人委托池 +### Considering the indexers delegation pool -委托人必须考虑的另一件事是他们拥有的委托池的比例。 所有的委托奖励都是平均分配的,根据委托人存入池子的数额来决定池子的简单再平衡。 这使委托人就拥有了委托池的份额: +Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: ![Share formula](/img/Share-Forumla.png) -因此,委托人可以进行数学计算,以确定向委托人提供 20% 的索引人提供了更好的回报。 +Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. 
-因此,委托人可以进行数学计算,以确定向委托人提供 20% 的 索引人提供了更好的回报。 +A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. -### 考虑委托容量 +### Considering the delegation capacity -另一个需要考虑的是委托容量。 目前,委托比例被设置为 16。 这意味着,如果一个索引人质押了 1,000,000 GRT,他们的委托容量是 16,000,000 GRT 的委托令牌,他们可以在协议中使用。 任何超过这个数量的委托令牌将稀释所有的委托人奖励。 +Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. -想象一下,一个索引人有 100,000,000 GRT 委托给他们,而他们的能力只有 16,000,000 GRT。 这意味着实际上,84,000,000 GRT 令牌没有被用来赚取令牌。 而所有的委托人,以及索引人,赚取的奖励也远远低于他们可以赚取的。 +Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. -使用这个公式,我们可以看到实际上只向委托人提供 20%的索引人比给索引人提供 90%的索引人实际上给予委托人更好的奖励。 +Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. -## 网络界面视频指南 +## Video guide for the network UI -使用这个公式,我们可以看到实际上只向委托人提供 20%的索引人比给索引人提供 90%的索引人实际上给予委托人更好的奖励。 +This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
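The delegation-capacity example above (a 16:1 ratio, 1,000,000 GRT of self-stake, 100,000,000 GRT delegated) can be verified with a few lines of arithmetic. This is a simplified illustration of the dilution effect, not the protocol's exact reward accounting.

```python
# Back-of-the-envelope check of the delegation capacity example above.
# Simplified illustration only.

DELEGATION_RATIO = 16  # from the text: capacity = 16 x the Indexer's own stake

def delegation_capacity(self_stake: float) -> float:
    return self_stake * DELEGATION_RATIO

def productive_delegation(delegated: float, self_stake: float) -> float:
    """Delegated GRT the Indexer can actually put to work."""
    return min(delegated, delegation_capacity(self_stake))

self_stake = 1_000_000
delegated = 100_000_000

capacity = delegation_capacity(self_stake)             # 16,000,000 GRT
working = productive_delegation(delegated, self_stake)
idle = delegated - working                              # 84,000,000 GRT earning nothing

print(f"Capacity:   {capacity:,.0f} GRT")
print(f"Productive: {working:,.0f} GRT")
print(f"Idle:       {idle:,.0f} GRT")
```

Every delegated GRT above the capacity simply spreads the same rewards over a larger pool, which is why the text recommends checking an Indexer's capacity before delegating.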
+>
From 4724abde79cfba4f2d546efff8afd1e782c4c36b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:55 -0500 Subject: [PATCH 115/241] New translations explorer.mdx (Spanish) --- pages/es/explorer.mdx | 244 +++++++++++++++++++++--------------------- 1 file changed, 122 insertions(+), 122 deletions(-) diff --git a/pages/es/explorer.mdx b/pages/es/explorer.mdx index 6ede1f9592e3..c8df28cfe03f 100644 --- a/pages/es/explorer.mdx +++ b/pages/es/explorer.mdx @@ -2,210 +2,210 @@ title: The Graph Explorer --- -Bienvenido al explorador de The Graph, o como nos gusta llamarlo, tu portal descentralizado al mundo de los subgrafos y los datos de la red. 👩🏽‍🚀 Este explorador de The Graph consta de varias partes en las que puedes interactuar con otros desarrolladores de subgrafos, desarrolladores de dApp, Curadores, Indexadores y Delegadores. Para obtener una descripción general de The Graph Explorer, échale un vistazo al siguiente video (o sigue leyendo lo que hemos escrito para ti): +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below):
-## Subgrafos +## Subgraphs -Vamos primero por lo más importante, si acabas de terminar de implementar y publicar tu subgrafo en el Subgraph Studio, la pestaña Subgrafos en la parte superior de la barra de navegación es el lugar para ver tus propios subgrafos terminados (y los subgrafos de otros) en la red descentralizada. Aquí podrás encontrar el subgrafo exacto que estás buscando según la fecha de creación, el monto de señalización o el nombre que le han asignado. +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. -![Imagen de Explorer 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -Cuando hagas clic en un subgrafo, podrás probar consultas en el playground y podrás aprovechar los detalles de la red para tomar decisiones informadas. También podrás señalar GRT en tu propio subgrafo o en los subgrafos de otros para que los indexadores sean conscientes de su importancia y calidad. Esto es fundamental porque señalar en un subgrafo incentiva su indexación, lo que significa que saldrá a la luz en la red para eventualmente entregar consultas. +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. -![Imagen de Explorer 2](/img/Subgraph-Details.png) +![Explorer Image 2](/img/Subgraph-Details.png) -En la página de cada subgrafo, aparecen varios detalles. Entre ellos se incluyen: +On each subgraph’s dedicated page, several details are surfaced. These include: -- Señalar/dejar de señalar un subgrafo -- Ver más detalles como gráficos, ID de implementación actual y otros metadatos -- Cambiar de versión para explorar iteraciones pasadas del subgrafo -- Consultar subgrafos a través de GraphQL -- Probar subgrafos en el playground -- Ver los Indexadores que están indexando en un subgrafo determinado -- Estadísticas de subgrafo (asignaciones, Curadores, etc.) -- Ver la entidad que publicó el subgrafo +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph -![Imagen de Explorer 3](/img/Explorer-Signal-Unsignal.png) +![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## Participantes +## Participants -Dentro de esta pestaña, obtendrás una vista panorámica de todas las personas que participan en las actividades de la red, como Indexadores, Delegadores y Curadores. A continuación, analizaremos en profundidad lo que significa cada pestaña para ti. +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. 
Below, we’ll go into an in depth review of what each tab means for you. -### 1. Indexadores +### 1. Indexers -![Imagen de Explorer 4](/img/Indexer-Pane.png) +![Explorer Image 4](/img/Indexer-Pane.png) -Comencemos con los Indexadores. Los Indexadores son la columna vertebral del protocolo, ya que son los que stakean en los subgrafos, los indexan y proveen consultas a cualquiera que consuma subgrafos. En la tabla de Indexadores, podrás ver los parámetros de delegación de un Indexador, su participación, cuánto han stakeado en cada subgrafo y cuántos ingresos han obtenido por las tarifas de consulta y las recompensas de indexación. Profundizaremos un poco más a continuación: +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: -- Query Fee Cut: es el porcentaje de los reembolsos obtenidos por la tarifa de consulta que el Indexador conserva cuando se divide con los Delegadores -- Effective Reward Cut: es el recorte de recompensas por indexación que se aplica al pool de delegación. Si es negativo, significa que el Indexador está regalando parte de sus beneficios. Si es positivo, significa que el Indexador se queda con alguno de tus beneficios -- Cooldown Remaining: el tiempo restante que le permitirá al Indexador cambiar los parámetros de delegación. Los plazos de configuración son ajustados por los Indexadores cuando ellos actualizan sus parámetros de delegación -- Owned: esta es la participación (o el stake) depositado por el Indexador, la cual puede reducirse por su mal comportamiento -- Delegated: participación de los Delegadores que puede ser asignada por el Indexador, pero que no se puede recortar -- Allocated: es el stake que los indexadores están asignando activamente a los subgrafos que están indexando -- Available Delegation Capacity: la cantidad de participación delegada que los indexadores aún pueden recibir antes de que se sobredeleguen -- Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de manera productiva. Cuando se excede parte del stake en la delegación, estos no contarán para las asignaciones o recompensas. -- Query Fees: estas son las tarifas totales que los usuarios (clientes) han pagado por todas las consultas de un Indexador -- Indexer Rewards: este es el total de recompensas del Indexador obtenidas por el Indexador y sus Delegadores durante todo el tiempo que trabajaron en conjunto. Las recompensas de los Indexadores se pagan mediante la emisión de GRT. +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. 
Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Los Indexadores pueden ganar tanto tarifas de consulta como recompensas de indexación. Funcionalmente, esto sucede cuando los participantes de la red delegan GRT a un Indexador. Esto permite a los Indexadores recibir tarifas de consulta y recompensas en función de sus parámetros como Indexador. Los parámetros de Indexación se establecen haciendo clic en el lado derecho de la tabla o entrando en el perfil de un Indexador y haciendo clic en el botón "Delegate". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. -Para obtener más información sobre cómo convertirse en Indexador, puedes consultar la [documentación oficial](/indexing) o \[Guías del Indexador de The Graph Academy.\](https://thegraph.academy/delegators/choosing- indexers/) +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) -![Panel de detalles de indexación](/img/Indexing-Details-Pane.png) +![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. Curadores +### 2. Curators -Los Curadores analizan los subgrafos para identificar qué subgrafos son de la más alta calidad. Una vez que un Curador ha encontrado un subgrafo potencialmente atractivo, puede curarlo señalándolo en su curva de vinculación. Al hacerlo, los Curadores informan a los Indexadores qué subgrafos son de alta calidad y necesitan ser indexados. +Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. -Los Curadores pueden ser miembros de la comunidad, consumidores de datos o incluso desarrolladores de subgrafos que señalan en sus propios subgrafos depositando tokens GRT en una curva de vinculación. Al depositar GRT, los Curadores anclan sus participaciones como curadores de un subgrafo. Como resultado, los Curadores son elegibles para ganar una parte de las tarifas de consulta que genera el subgrafo que han señalado. 
La curva de vinculación incentiva a los Curadores a curar fuentes de datos de la más alta calidad. La tabla de Curador en esta sección te permitirá ver: +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: -- La fecha en que el Curador comenzó a curar -- El número de GRT que se depositaron -- El número de participaciones que posee un Curador +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns -![Imagen de Explorer 6](/img/Curation-Overview.png) +![Explorer Image 6](/img/Curation-Overview.png) -Si deseas obtener más información sobre la función de un Curador, puedes hacerlo visitando los siguientes enlaces de [The Graph Academy](https://thegraph.academy/curators/) o [ documentación oficial.](/curating) +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) -### 3. Delegadores +### 3. Delegators -Los Delegadores juegan un rol esencial en la seguridad y descentralización que conforman la red de The Graph. Participan en la red delegando (es decir, "stakeadon") tokens GRT a uno o varios Indexadores. Sin Delegadores, es menos probable que los Indexadores obtengan recompensas y tarifas significativas. Por lo tanto, los Indexadores buscan atraer Delegadores ofreciéndoles una parte de las recompensas de indexación y las tarifas de consulta que ganan. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. -Los Delegadores, a su vez, seleccionan a los Indexadores en función de una serie de diferentes parámetros, como el rendimiento que tenía ese indexador, las tasas de recompensa por indexación y los recortes compartidos de las tarifas de consulta. ¡La reputación dentro de la comunidad también puede influir en esto! Se recomienda conectarse con los Indexadores seleccionados a través del [Discord de The Graph](https://thegraph.com/discord) o el [¡Foro de The Graph](https://forum.thegraph.com/)! +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! 
-![Imagen de Explorer 7](/img/Delegation-Overview.png) +![Explorer Image 7](/img/Delegation-Overview.png) -La tabla de Delegadores te permitirá ver los Delegadores activos en la comunidad, así como las siguientes métricas: +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: -- El número de Indexadores a los que delega este Delegador -- La delegación principal de un Delegador -- Las recompensas que han ido acumulando, pero que aún no han retirado del protocolo -- Las recompensas realizadas, es decir, las que ya retiraron del protocolo -- Cantidad total de GRT que tienen actualmente dentro del protocolo -- La fecha en la que delegaron por última vez +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at -Si deseas obtener más información sobre cómo convertirte en un Delegador, ¡No busques más! Todo lo que tienes que hacer es dirigirte a la [documentación oficial](/delegating) o [The Graph Academy](https://docs.thegraph.academy/network/delegators). +If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## Red (network) +## Network -En la sección Network (red), verás los KPI globales, así como la capacidad de cambiar a una base por ciclo y analizar las métricas de la red con más detalle. Estos detalles te darán una idea de cómo se está desempeñando la red a lo largo del tiempo. +In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### Actividad (activity) +### Activity -La sección actividad tiene todas las métricas de red actuales, así como algunas métricas acumulativas a lo largo del tiempo. Aquí puedes ver cosas como: +The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: -- La cantidad total de stake que circula en estos momentos -- La participación que se divide entre los Indexadores y sus Delegadores -- Suministro total, GRT anclados y quemados desde el comienzo de la red -- Recompensas totales de Indexación desde el comienzo del protocolo -- Parámetros del protocolo como las recompensas de curación, tasa de inflación y más -- Recompensas y tarifas del ciclo actual +- The current total network stake +- The stake split between the Indexers and their Delegators +- Total supply, minted, and burned GRT since the network inception +- Total Indexing rewards since the inception of the protocol +- Protocol parameters such as curation reward, inflation rate, and more +- Current epoch rewards and fees -Algunos detalles clave que vale la pena mencionar: +A few key details that are worth mentioning: -- **Las tarifas de consulta representan las tarifas generadas por los consumidores**, y que pueden ser reclamadas (o no) por los Indexadores después de un período de al menos 7 ciclos (ver más abajo) después de que se han cerrado las asignaciones hacia los subgrafos y los datos que servían han sido validados por los consumidores. 
-- **Las recompensas de indexación representan la cantidad de recompensas que los Indexadores reclamaron por la emisión de la red durante el ciclo.** Aunque la emisión del protocolo es fija, las recompensas solo se anclan una vez que los Indexadores cierran sus asignaciones hacia los subgrafos que han indexado. Por lo tanto, el número de recompensas por ciclo suele variar (es decir, durante algunos ciclos, es posible que los Indexadores hayan cerrado colectivamente asignaciones que han estado abiertas durante muchos días). +- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). -![Imagen de Explorer 8](/img/Network-Stats.png) +![Explorer Image 8](/img/Network-Stats.png) -### Ciclos (epoch) +### Epochs -En la sección de ciclos puedes analizar diferentes métricas por cada ciclo, tales como: +In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- Inicio de ciclo o bloque final -- Tarifas de consulta generadas y recompensas de indexación recolectadas durante un ciclo específico -- Estado del ciclo, el cual se refiere al cobro y distribución de la tarifa de consulta y puede tener diferentes estados: - - El ciclo activo es aquel en la que los indexadores actualmente asignan su participación (staking) y cobran tarifas por consultas - - Los ciclos liquidados son aquellos en los que ya se han liquidado las recompensas y demás métricas. Esto significa que los Indexadores están sujetos a recortes si los consumidores abren disputas en su contra. - - Los ciclos de distribución son los ciclos en los que los canales correspondiente a los ciclos son establecidos y los Indexadores pueden reclamar sus reembolsos correspondientes a las tarifas de consulta. - - Los ciclos finalizados son los ciclos que no tienen reembolsos en cuanto a las tarifas de consulta, estos son reclamados por parte de los Indexadores, por lo que estos se consideran como finalizados. +- Epoch start or end block +- Query fees generated and indexing rewards collected during a specific epoch +- Epoch status, which refers to the query fee collection and distribution and can have different states: + - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees + - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. + - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. 
-![Imagen de Explorer 9](/img/Epoch-Stats.png) +![Explorer Image 9](/img/Epoch-Stats.png) -## Tu perfil de usuario +## Your User Profile -Ahora que hemos hablado de las estadísticas de la red, pasemos a tu perfil personal. Tu perfil personal es el lugar donde puedes ver tu actividad personal dentro de la red, sin importar cómo estés participando en la red. Tu billetera Ethereum actuará como tu perfil de usuario y desde tu panel de usuario (dashboard) podrás ver lo siguiente: +Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: -### Información general del perfil +### Profile Overview -Aquí es donde puedes ver las acciones actuales que realizaste. Aquí también podrás encontrar la información de tu perfil, la descripción y el sitio web (si agregaste uno). +This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). -![Imagen de Explorer 10](/img/Profile-Overview.png) +![Explorer Image 10](/img/Profile-Overview.png) -### Pestaña de subgrafos +### Subgraphs Tab -Si haces clic en la pestaña subgrafos, verás tus subgrafos publicados. Esto no incluirá ningún subgrafo implementado con la modalidad de CLI o con fines de prueba; los subgrafos solo aparecerán cuando se publiquen en la red descentralizada. +If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. -![Imagen de Explorer 11](/img/Subgraphs-Overview.png) +![Explorer Image 11](/img/Subgraphs-Overview.png) -### Pestaña de indexación +### Indexing Tab -Si haces clic en la pestaña Indexación, encontrarás una tabla con todas las asignaciones activas e históricas hacia los subgrafos, así como gráficos que puedes analizar y ver tu desempeño anterior como Indexador. +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. -Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta. Verás las siguientes métricas: +This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics: -- Delegated Stake: la participación de los Delegados que puedes asignar pero que no se puede recortar -- Total Query Fees: las tarifas totales que los usuarios han pagado por las consultas que has atendido durante tu participación -- Indexer Rewards: la cantidad total de recompensas que le Indexador ha recibido, se valora en GRT -- Fee Cut: es el porcentaje que obtendrás por las consultas que has atendido, estos se distribuyen al cerrar un ciclo o cuando te separes de tus delegadores -- Rewards Cut: este es el porcentaje de recompensas que dividirás con tus delegadores una vez se cierre el ciclo -- Owned: tu participación (stake) depositada, que podría reducirse por un comportamiento malicioso o incorrecto en la red +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior -![Imagen de Explorer 12](/img/Indexer-Stats.png) +![Explorer Image 12](/img/Indexer-Stats.png) -### Pestaña de delegación +### Delegating Tab -Los Delegadores son importantes para la red de The Graph. Un Delegador debe usar su conocimiento para elegir un Indexador que le proporcionará un retorno saludable y sostenibles. Aquí puedes encontrar detalles de tus delegaciones activas e históricas, junto con las métricas de los Indexadores a los que delegaste en el pasado. +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. -En la primera mitad de la página, puedes ver tu gráfico de delegación, así como el gráfico de recompensas históricas. A la izquierda, puedes ver los KPI que reflejan tus métricas de delegación actuales. +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. -Las métricas de Delegador que verás aquí en esta pestaña incluyen: +The Delegator metrics you’ll see here in this tab include: -- Recompensas totales de delegación (Total delegation rewards) -- Recompensas totales no realizadas (Total unrealized rewards) -- Recompensas totales realizadas (Total realized rewards) +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards -En la segunda mitad de la página, tienes la tabla de delegaciones. Aquí puedes ver los Indexadores a los que delegaste, así como sus detalles (como recortes de recompensas, tiempo de enfriamiento, etc.). +In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). -Con los botones en el lado derecho de la tabla, puede administrar su delegación: delegar más, quitar su delegación o retirar su delegación después del período de descongelación. 
+With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -Con los botones situados al lado derecho de la tabla, puedes administrar tu delegación: delegar más, anular la delegación actual o retirar tu delegación después del período de descongelación. +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). -![Imagen de Explorer 13](/img/Delegation-Stats.png) +![Explorer Image 13](/img/Delegation-Stats.png) -### Pestaña de curación +### Curating Tab -En la pestaña Curación, encontrarás todos los subgrafos a los que estás señalando (lo que te permite recibir tarifas de consulta). La señalización permite a los Curadores destacar un subgrafo importante y fiable a los Indexadores, dándoles a entender que debe ser indexado. +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. -Dentro de esta pestaña, encontrarás una descripción general de: +Within this tab, you’ll find an overview of: -- Todos los subgrafos que estás curando con detalles de la señalización actual -- Participaciones totales en cada subgrafo -- Recompensas de consulta por cada subgrafo -- Actualizaciones de los subgrafos +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph +- Updated at date details -![Imagen de Explorer 14](/img/Curation-Stats.png) +![Explorer Image 14](/img/Curation-Stats.png) -## Configuración de tu perfil +## Your Profile Settings -Dentro de tu perfil de usuario, podrás administrar los detalles de tu perfil personal (como configurar un nombre de ENS). Si eres un Indexador, tienes aún más acceso a la configuración al alcance de tu mano. En tu perfil de usuario, podrás configurar los parámetros y operadores de tu delegación. +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. -- Los operadores toman acciones limitadas en el protocolo en nombre del Indexador, como abrir y cerrar asignaciones. Los operadores suelen ser otras direcciones de Ethereum, separadas de su billetera de staking, con acceso cerrado a la red que los Indexadores pueden configurar personalmente -- Los parámetros de delegación te permiten controlar la distribución de GRT entre tu y tus Delegadores. +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. -![Imagen de Explorer 15](/img/Profile-Settings.png) +![Explorer Image 15](/img/Profile-Settings.png) -Como tu portal oficial en el mundo de los datos descentralizados, The Graph Explorer te permite realizar una variedad de acciones, sin importar tu rol en la red. 
Puedes acceder a la configuración de tu perfil abriendo el menú desplegable junto a tu dirección y luego haciendo clic en el botón de configuración (settings). +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.
![Wallet details](/img/Wallet-Details.png)
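
The explorer page patched above mentions testing queries in the playground and querying subgraphs via GraphQL. As a rough illustration of what such a request looks like, here is a minimal query sketch; the entity and field names (`tokens`, `id`, `owner`) are hypothetical and depend entirely on the schema of the subgraph opened in the Explorer.

```graphql
# Minimal playground-style query sketch. The entity and field names below
# (tokens, id, owner) are illustrative assumptions; the actual fields are
# defined by the schema of the specific subgraph being queried.
{
  tokens(first: 5, orderBy: id) {
    id
    owner
  }
}
```

Pagination arguments such as `first` and ordering arguments such as `orderBy` follow the conventions of graph-node generated APIs; the playground's schema browser shows which entities and fields a given subgraph actually exposes.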
From b0406d1bc8f4f16171d9eb5bed7708e5c5b4988f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:56 -0500 Subject: [PATCH 116/241] New translations explorer.mdx (Arabic) --- pages/ar/explorer.mdx | 240 +++++++++++++++++++++--------------------- 1 file changed, 120 insertions(+), 120 deletions(-) diff --git a/pages/ar/explorer.mdx b/pages/ar/explorer.mdx index ae31b016d8a4..c8df28cfe03f 100644 --- a/pages/ar/explorer.mdx +++ b/pages/ar/explorer.mdx @@ -1,14 +1,14 @@ --- -title: مستكشف +title: The Graph Explorer --- -مرحبا بك في مستكشف Graph ، أو كما نحب أن نسميها ، بوابتك اللامركزية في عالم subgraphs وبيانات الشبكة. 👩🏽‍🚀 مستكشف TheGraph يتكون من عدة اجزاء حيث يمكنك التفاعل مع مطوري Subgraphs الاخرين ، ومطوري dApp ،والمنسقين والمفهرسين، والمفوضين. للحصول على نظرة عامة حول the Graph Explorer، راجع الفيديو أدناه (أو تابع القراءة أدناه): +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below):
@@ -16,196 +16,196 @@ title: مستكشف ## Subgraphs -أولا ، إذا انتهيت من نشر Subgraphs الخاص بك في Subgraph Studio ، فإن علامة التبويب Subgraphs في الجزء العلوي من شريط التنقل هي المكان المناسب لعرض Subgraphs الخاصة بك (و Subgraphs الآخرين) على الشبكة اللامركزية. هنا ، ستتمكن من العثور على Subgraphs الذي تبحث عنه بدقة بناء على تاريخ الإنشاء أو مقدار الإشارة(signal amount) أو الاسم. +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. -![صورة المستكشف 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -عند النقر على Subgraphs ، يمكنك اختبار الاستعلامات وستكون قادرا على الاستفادة من تفاصيل الشبكة لاتخاذ قرارات صائبة. سيمكنك ايضا من الإشارة إلى GRT على Subgraphs الخاص بك أو subgraphs الآخرين لجعل المفهرسين على علم بأهميته وجودته. هذا أمر مهم جدا وذلك لأن الإشارة ل Subgraphs تساعد المفهرسين في اختيار ذلك ال Subgraph لفهرسته ، مما يعني أنه سيظهر على الشبكة لتقديم الاستعلامات. +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. -![صورة المستكشف 2](/img/Subgraph-Details.png) +![Explorer Image 2](/img/Subgraph-Details.png) -في كل صفحة مخصصة ل subgraphs ، تظهر العديد من التفاصيل. وهذا يتضمن +On each subgraph’s dedicated page, several details are surfaced. These include: -- أشر/الغي الإشارة على Subgraphs -- اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى -- بدّل بين الإصدارات وذلك لاستكشاف التكرارات السابقة ل subgraphs -- استعلم عن subgraphs عن طريق GraphQL -- اختبار subgraphs في playground -- اعرض المفهرسين الذين يفهرسون Subgraphs معين -- إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ) -- اعرض من قام بنشر ال Subgraphs +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph -![صورة المستكشف 3](/img/Explorer-Signal-Unsignal.png) +![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## المشاركون +## Participants -ضمن علامة التبويب هذه ، ستحصل على نظرة شاملة لجميع الأشخاص المشاركين في أنشطة الشبكة ، مثل المفهرسين والمفوضين Delegators والمنسقين Curators. سندخل في نظرة شاملة أدناه لما تعنيه كل علامة تبويب. +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. -### 2. المنسقون Curators +### 1. Indexers -![صورة المستكشف 4](/img/Indexer-Pane.png) +![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. 
دعونا نبدأ مع المفهرسين المفهرسون هم العمود الفقري للبروتوكول ، كونهم بقومون بفهرسة ال Subgraph ، وتقديم الاستعلامات إلى أي شخص يستخدم subgraphs. في جدول المفهرسين ، يمكنك رؤية البارامترات الخاصة بتفويض المفهرسين ، وحصتهم ، ومقدار ما قاموا بتحصيله في كل subgraphs ، ومقدار الإيرادات التي حصلو عليها من رسوم الاستعلام ومكافآت الفهرسة. Deep dives below: +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: -- اقتطاع رسوم الاستعلام Query Fee Cut - هي النسبة المئوية لخصم رسوم الاستعلام والتي يحتفظ بها المفهرس عند التقسيم مع المفوضين Delegators -- اقتطاع المكافأة الفعالة Effective Reward Cut - هو اقتطاع مكافأة الفهرسة indexing reward cut المطبقة على مجموعة التفويضات. إذا كانت سالبة ، فهذا يعني أن المفهرس يتنازل عن جزء من مكافآته. إذا كانت موجبة، فهذا يعني أن المفهرس يحتفظ ببعض مكافآته -- فترة التهدئة Cooldown المتبقية - هو الوقت المتبقي حتى يتمكن المفهرس من تغيير بارامترات التفويض. يتم إعداد فترات التهدئة من قبل المفهرسين عندما يقومون بتحديث بارامترات التفويض الخاصة بهم -- مملوكة Owned - هذه هي حصة المفهرس المودعة ، والتي قد يتم شطبها بسبب السلوك الضار أو غير الصحيح -- مفوضة Delegated - هي حصة مفوضة من قبل المفوضين والتي يمكن تخصيصها بواسطة المفهرس ، لكن لا يمكن شطبها -- مخصصة Allocated - حصة يقوم المفهرسون بتخصيصها بشكل نشط نحو subgraphs التي يقومون بفهرستها -- سعة التفويض المتاحة Available Delegation Capacity - هو مقدار الحصة المفوضة التي يمكن للمفهرسين تلقيها قبل الوصول للحد الأقصى لتلقي التفويضات overdelegated -- سعة التفويض القصوى Max Delegation Capacity - هي الحد الأقصى من الحصة المفوضة التي يمكن للمفهرس قبولها. لا يمكن استخدام الحصة المفوضة الزائدة للمخصصات allocations أو لحسابات المكافآت. -- رسوم الاستعلام Query Fees - هذا هو إجمالي الرسوم التي دفعها المستخدمون للاستعلامات التي يقدمها المفهرس طوال الوقت -- مكافآت المفهرس Indexer Rewards - هو مجموع مكافآت المفهرس التي حصل عليها المفهرس ومفوضيهم Delegators. تدفع مكافآت المفهرس ب GRT. +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. 
+- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -يمكن للمفهرسين كسب كلا من رسوم الاستعلام ومكافآت الفهرسة. يحدث هذا عندما يقوم المشاركون في الشبكة بتفويض GRT للمفهرس. يتيح ذلك للمفهرسين تلقي رسوم الاستعلام ومكافآت بناء على بارامترات المفهرس الخاصة به. يتم تعيين بارامترات الفهرسة عن طريق النقر على الجانب الأيمن من الجدول ، أو بالانتقال إلى ملف تعريف المفهرس والنقر فوق زر "Delegate". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. -لمعرفة المزيد حول كيفية أن تصبح مفوضا كل ما عليك فعله هو التوجه إلى [ الوثائق الرسمية ](/delegating) أو [ أكاديمية The Graph ](https://docs.thegraph.academy/network/delegators). +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) ![Indexing details pane](/img/Indexing-Details-Pane.png) -### 3. المفوضون Delegators +### 2. Curators -يقوم المنسقون بتحليل ال subgraphs لتحديد ال subgraphs ذات الجودة الأعلى. عندما يجد المنسق subgraph يراه جيدا ،فيمكنه تنسيقه من خلال الإشارة إلى منحنى الترابط الخاص به. وبهذا يسمح المنسقون للمفهرسين بمعرفة ماهي ال subgraphs عالية الجودة والتي يجب فهرستها. +Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. -يمكن للمنسقين أن يكونوا من أعضاء المجتمع أو من مستخدمي البيانات أو حتى من مطوري ال subgraph والذين يشيرون إلى ال subgraphs الخاصة بهم وذلك عن طريق إيداع توكن GRT في منحنى الترابط. وبإيداع GRT ، يقوم المنسقون بصك أسهم التنسيق في ال subgraph. نتيجة لذلك ، يكون المنسقون مؤهلين لكسب جزء من رسوم الاستعلام التي يُنشئها ال subgraph المشار إليها. يساعد منحنى الترابط المنسقين على تنسيق مصادر البيانات الأعلى جودة. جدول المنسق في هذا القسم سيسمح لك برؤية: +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: -- التاريخ الذي بدأ فيه المنسق بالتنسق -- عدد ال GRT الذي تم إيداعه -- عدد الأسهم التي يمتلكها المنسق +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns -![صورة المستكشف 6](/img/Curation-Overview.png) +![Explorer Image 6](/img/Curation-Overview.png) -إذا كنت تريد معرفة المزيد عن دور المنسق ، فيمكنك القيام بذلك عن طريق زيارة الروابط التالية ـ [ أكاديمية The Graph ](https://thegraph.academy/curators/) أو \[ الوثائق الرسمية. 
\](/ curating) +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) -### 3. المفوضون Delegators +### 3. Delegators -يلعب المفوضون دورا رئيسيا في الحفاظ على الأمن واللامركزية في شبكة The Graph. يشاركون في الشبكة عن طريق تفويض (أي ، "Staking") توكن GRT إلى مفهرس واحد أو أكثر. بدون المفوضين، من غير المحتمل أن يربح المفهرسون مكافآت ورسوم مجزية. لذلك ، يسعى المفهرسون إلى جذب المفوضين من خلال منحهم جزءا من مكافآت الفهرسة ورسوم الاستعلام التي يكسبونها. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. -يقوم المفوضون بدورهم باختيار المفهرسين بناء على عدد من المتغيرات المختلفة ، مثل الأداء السابق ، ومعدلات مكافأة الفهرسة ، واقتطاع رسوم الاستعلام query fee cuts. يمكن أن تلعب السمعة داخل المجتمع دورا في هذا! يوصى بالتواصل مع المفهرسين المختارين عبر [ The Graph's Discord ](https://thegraph.com/discord) أو [ منتدى The Graph ](https://forum.thegraph.com/)! +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! -![صورة المستكشف 7](/img/Delegation-Overview.png) +![Explorer Image 7](/img/Delegation-Overview.png) -جدول المفوضين سيسمح لك برؤية المفوضين النشطين في المجتمع ، بالإضافة إلى مقاييس مثل: +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: -- عدد المفهرسين المفوض إليهم -- التفويض الأصلي للمفوض Delegator’s original delegation -- المكافآت التي جمعوها والتي لم يسحبوها من البروتوكول -- المكافآت التي تم سحبها من البروتوكول -- كمية ال GRT التي يمتلكونها حاليا في البروتوكول -- تاريخ آخر تفويض لهم +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at -If you want to learn more about how to become a Delegator, look no further! لمعرفة المزيد حول كيفية أن تصبح مفهرسا ، يمكنك إلقاء نظرة على [ الوثائق الرسمية ](/indexing) أو [ دليل مفهرس أكاديمية The Graph. +If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## الشبكة Network +## Network -في قسم الشبكة ، سترى KPIs بالإضافة إلى القدرة على التبديل بين الفترات وتحليل مقاييس الشبكة بشكل مفصل. ستمنحك هذه التفاصيل فكرة عن كيفية أداء الشبكة بمرور الوقت. +In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. 
-### النشاط Activity +### Activity -يحتوي قسم النشاط على جميع مقاييس الشبكة الحالية بالإضافة إلى بعض المقاييس المتراكمة بمرور الوقت. هنا يمكنك رؤية أشياء مثل: +The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: -- إجمالي حصة الشبكة الحالية -- الحصة المقسمة بين المفهرسين ومفوضيهم -- إجمالي العرض ،و الصك ،وال GRT المحروقة منذ بداية الشبكة -- إجمالي مكافآت الفهرسة منذ بداية البروتوكول -- بارامترات البروتوكول مثل مكافأة التنسيق ومعدل التضخم والمزيد -- رسوم ومكافآت الفترة الحالية +- The current total network stake +- The stake split between the Indexers and their Delegators +- Total supply, minted, and burned GRT since the network inception +- Total Indexing rewards since the inception of the protocol +- Protocol parameters such as curation reward, inflation rate, and more +- Current epoch rewards and fees -بعض التفاصيل الأساسية الجديرة بالذكر: +A few key details that are worth mentioning: -- ** رسوم الاستعلام هي الرسوم التي يولدها المستخدمون** ،ويمكن للمفهرسين المطالبة بها (أو لا) بعد مدة لا تقل عن 7 فترات (انظر أدناه) بعد إغلاق مخصصاتهم لل subgraphs والتحقق من صحة البيانات التي قدموها من قبل المستخدمين. -- ** مكافآت الفهرسة هي مقدار المكافآت التي حصل عليها المفهرسون من انتاجات الشبكة خلال الفترة. ** على الرغم من أن انتاجات البروتوكول ثابتة إلا أنه لا يتم صك المكافآت إلا بعد إغلاق المفهرسين لمخصصاتهم ل subgraphs التي قاموا بفهرستها. وبالتالي ، يختلف عدد المكافآت لكل فترة (على سبيل المثال ، خلال بعض الفترات ، ربما يكون المفهرسون قد أغلقوا المخصصات التي كانت مفتوحة لعدة أيام). +- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). -![صورة المستكشف 8](/img/Network-Stats.png) +![Explorer Image 8](/img/Network-Stats.png) -### الفترات Epochs +### Epochs -في قسم الفترات، يمكنك تحليل مقاييس كل فترة مثل: +In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- بداية الفترة أو نهايتها -- مكافآت رسوم الاستعلام والفهرسة التي تم جمعها خلال فترة معينة -- حالة الفترة، والتي تشير إلى رسوم الاستعلام وتوزيعها ويمكن أن يكون لها حالات مختلفة: - - الفترة النشطة هي الفترة التي يقوم فيها المفهرسون حاليا بتخصيص الحصص وتحصيل رسوم الاستعلام - - فترات التسوية هي تلك الفترات التي يتم فيها تسوية قنوات الحالة state channels. هذا يعني أن المفهرسين يكونون عرضة للشطب إذا فتح المستخدمون اعتراضات ضدهم. - - فترات التوزيع هي تلك الفترات التي يتم فيها تسوية قنوات الحالة للفترات ويمكن للمفهرسين المطالبة بخصم رسوم الاستعلام الخاصة بهم. - - الفترات النهائية هي تلك الفترات التي ليس بها خصوم متبقية على رسوم الاستعلام للمطالبة بها من قبل المفهرسين ، وبالتالي يتم الانتهاء منها. 
+- Epoch start or end block +- Query fees generated and indexing rewards collected during a specific epoch +- Epoch status, which refers to the query fee collection and distribution and can have different states: + - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees + - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. + - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. -![صورة المستكشف 9](/img/Epoch-Stats.png) +![Explorer Image 9](/img/Epoch-Stats.png) -## ملف تعريف المستخدم الخاص بك +## Your User Profile -الآن بعد أن تحدثنا عن احصائيات الشبكة ، دعنا ننتقل إلى ملفك الشخصي. ملفك الشخصي هو المكان المناسب لك لمشاهدة نشاط الشبكة ، بغض النظر عن كيفية مشاركتك في الشبكة. ستعمل محفظة Ethereum الخاصة بك كملف تعريف المستخدم الخاص بك ، وباستخدام User Dashboard، ستتمكن من رؤية: +Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: -### نظرة عامة على الملف الشخصي +### Profile Overview -هذا هو المكان الذي يمكنك فيه رؤية الإجراءات الحالية التي اتخذتها. وأيضا هو المكان الذي يمكنك فيه العثور على معلومات ملفك الشخصي والوصف وموقع الويب (إذا قمت بإضافته). +This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). -![صورة المستكشف 10](/img/Profile-Overview.png) +![Explorer Image 10](/img/Profile-Overview.png) -### تبويب ال Subgraphs +### Subgraphs Tab -إذا قمت بالنقر على تبويب Subgraphs ، فسترى ال subgraphs المنشورة الخاصة بك. لن يشمل ذلك أي subgraphs تم نشرها ب CLI لأغراض الاختبار - لن تظهر ال subgraphs إلا عند نشرها على الشبكة اللامركزية. +If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. -![صورة المستكشف 11](/img/Subgraphs-Overview.png) +![Explorer Image 11](/img/Subgraphs-Overview.png) -### تبويب الفهرسة +### Indexing Tab -إذا قمت بالنقر على تبويب الفهرسة "Indexing " ، فستجد جدولا به جميع المخصصات النشطة والتاريخية ل subgraphs ، بالإضافة إلى المخططات التي يمكنك تحليلها ورؤية أدائك السابق كمفهرس. +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. -هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. سترى المقاييس التالية: +This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics: -- الحصة المفوضة Delegated Stake - هي الحصة المفوضة من قبل المفوضين والتي يمكنك تخصيصها ولكن لا يمكن شطبها -- إجمالي رسوم الاستعلام Total Query Fees - هو إجمالي الرسوم التي دفعها المستخدمون مقابل الاستعلامات التي قدمتها بمرور الوقت -- مكافآت المفهرس Indexer Rewards - هو المبلغ الإجمالي لمكافآت المفهرس التي تلقيتها ك GRT -- اقتطاع الرسوم Fee Cut -هي النسبة المئوية لخصوم رسوم الاستعلام التي ستحتفظ بها عند التقسيم مع المفوضين -- اقتطاع المكافآت Rewards Cut -هي النسبة المئوية لمكافآت المفهرس التي ستحتفظ بها عند التقسيم مع المفوضين -- مملوكة Owned - هي حصتك المودعة ، والتي يمكن شطبها بسبب السلوك الضار أو غير الصحيح +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior -![صورة المستكشف 12](/img/Indexer-Stats.png) +![Explorer Image 12](/img/Indexer-Stats.png) -### تبويب التفويض Delegating Tab +### Delegating Tab -المفوضون مهمون لشبكة the Graph. يجب أن يستخدم المفوض معرفته لاختيار مفهرسا يوفر عائدا على المكافآت. هنا يمكنك العثور على تفاصيل تفويضاتك النشطة والتاريخية ، مع مقاييس المفهرسين الذين قمت بتفويضهم. +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. -في النصف الأول من الصفحة ، يمكنك رؤية مخطط التفويض الخاص بك ، بالإضافة إلى مخطط المكافآت فقط. إلى اليسار ، يمكنك رؤية KPIs التي تعكس مقاييس التفويض الحالية. +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. -مقاييس التفويض التي ستراها هنا في علامة التبويب هذه تشمل ما يلي: +The Delegator metrics you’ll see here in this tab include: -- إجمالي مكافآت التفويض -- إجمالي المكافآت الغير محققة -- إجمالي المكافآت المحققة +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards -في النصف الثاني من الصفحة ، لديك جدول التفويضات. هنا يمكنك رؤية المفهرسين الذين فوضتهم ، بالإضافة إلى تفاصيلهم (مثل المكافآت المقتطعة rewards cuts، و cooldown ، الخ). +In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -باستخدام الأزرار الموجودة على الجانب الأيمن من الجدول ، يمكنك إدارة تفويضاتك أو تفويض المزيد أو إلغاء التفويض أو سحب التفويض بعد فترة الذوبان thawing. +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). 
-![صورة المستكشف 13](/img/Delegation-Stats.png) +![Explorer Image 13](/img/Delegation-Stats.png) -### تبويب التنسيق Curating +### Curating Tab -في علامة التبويب Curation ، ستجد جميع ال subgraphs التي تشير إليها (مما يتيح لك تلقي رسوم الاستعلام). الإشارة تسمح للمنسقين التوضيح للمفهرسين ماهي ال subgraphs ذات الجودة العالية والموثوقة ، مما يشير إلى ضرورة فهرستها. +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. -ضمن علامة التبويب هذه ، ستجد نظرة عامة حول: +Within this tab, you’ll find an overview of: -- جميع ال subgraphs التي تقوم بتنسيقها مع تفاصيل الإشارة -- إجمالي الحصة لكل subgraph -- مكافآت الاستعلام لكل subgraph +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph - Updated at date details -![صورة المستكشف 14](/img/Curation-Stats.png) +![Explorer Image 14](/img/Curation-Stats.png) -## إعدادات ملف التعريف الخاص بك +## Your Profile Settings -ضمن ملف تعريف المستخدم الخاص بك ، ستتمكن من إدارة تفاصيل ملفك الشخصي (مثل إعداد اسم ENS). إذا كنت مفهرسا ، فستستطيع الوصول إلى إعدادت أكثر. في ملف تعريف المستخدم الخاص بك ، ستتمكن من إعداد بارامترات التفويض والمشغلين. +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. -- Operators تتخذ إجراءات محدودة في البروتوكول نيابة عن المفهرس ، مثل عمليات فتح وإغلاق المخصصات. Operators هي عناوين Ethereum أخرى ، منفصلة عن محفظة staking الخاصة بهم ، مع بوابة وصول للشبكة التي يمكن للمفهرسين تعيينها بشكل شخصي -- تسمح لك بارامترات التفويض بالتحكم في توزيع GRT بينك وبين المفوضين. +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. -![صورة المستكشف 15](/img/Profile-Settings.png) +![Explorer Image 15](/img/Profile-Settings.png) -كبوابتك الرسمية إلى عالم البيانات اللامركزية ، يتيح لك Graph Explorer اتخاذ مجموعة متنوعة من الإجراءات ، بغض النظر عن دورك في الشبكة. يمكنك الوصول إلى إعدادات ملفك الشخصي عن طريق فتح القائمة المنسدلة بجوار عنوانك ، ثم النقر على زر Settings. +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. -
تفاصيل المحفظة
+
![Wallet details](/img/Wallet-Details.png)
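
The Indexers table described in the page patched above surfaces metrics such as owned stake, delegated stake, query fee cut, and indexing reward cut. Those figures are themselves indexed data, so they can be fetched with the same kind of GraphQL query. The sketch below is illustrative only; the entity and field names (`indexers`, `stakedTokens`, `delegatedTokens`, `queryFeeCut`, `indexingRewardCut`) are assumptions loosely modeled on the network subgraph and may not match its schema exactly.

```graphql
# Illustrative sketch of fetching Indexer metrics like those shown in the
# Explorer's Indexers table. Entity and field names are assumptions loosely
# modeled on the network subgraph schema and may differ from it.
{
  indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens
    delegatedTokens
    queryFeeCut
    indexingRewardCut
  }
}
```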
From 93728d966354357aae495500d717e59c4726a516 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:57 -0500 Subject: [PATCH 117/241] New translations explorer.mdx (Japanese) --- pages/ja/explorer.mdx | 246 +++++++++++++++++++++--------------------- 1 file changed, 123 insertions(+), 123 deletions(-) diff --git a/pages/ja/explorer.mdx b/pages/ja/explorer.mdx index c0ed9a036920..c8df28cfe03f 100644 --- a/pages/ja/explorer.mdx +++ b/pages/ja/explorer.mdx @@ -1,211 +1,211 @@ --- -title: エクスプローラー +title: The Graph Explorer --- -グラフエクスプローラーは、サブグラフとネットワークデータの世界への分散型ポータルです。 👩🏽‍🚀 グラフエクスプローラーは、他のサブグラフ開発者、dapp開発者、キュレーター、インデクサー、デリゲーターと交流できる複数のパートで構成されています。 グラフエクスプローラーの概要については、以下のビデオをご覧ください。 +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below):
-## サブグラフ +## Subgraphs -まず最初に、ナビゲーションバーの上部にある「Subgraphs」タブは、分散型ネットワーク上の自分の完成したサブグラフ(および他の人のサブグラフ)を見るための場所です。 ここでは、作成日、シグナル量、名前などから、探しているサブグラフを見つけることができます。 +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. -![エクスプローラーイメージ 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -サブグラフをクリックすると、プレイグラウンドでクエリをテストすることができ、ネットワークの詳細を活用して情報に基づいた意思決定を行うことができます。 また、自分のサブグラフや他の人のサブグラフで GRT をシグナリングして、その重要性や品質をインデクサに認識させることができます。 これは、サブグラフにシグナルを送ることで、そのサブグラフがインデックス化され、最終的にクエリに対応するためにネットワーク上に現れてくることを意味するため、非常に重要です。 +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. -![エクスプローラーイメージ 2](/img/Subgraph-Details.png) +![Explorer Image 2](/img/Subgraph-Details.png) -各サブグラフの専用ページでは、いくつかの詳細が表示されます。 その内容は以下の通りです: +On each subgraph’s dedicated page, several details are surfaced. These include: -- サブグラフのシグナル/アンシグナル -- チャート、現在のデプロイメント ID、その他のメタデータなどの詳細情報の表示 -- バージョンを切り替えて、サブグラフの過去のイテレーションを調べる -- GraphQL によるサブグラフのクエリ -- プレイグラウンドでのサブグラフのテスト -- 特定のサブグラフにインデクシングしているインデクサーの表示 -- サブグラフの統計情報(割り当て数、キュレーターなど) -- サブグラフを公開したエンティティの表示 +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph -![エクスプローラーイメージ 3](/img/Explorer-Signal-Unsignal.png) +![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## 参加者 +## Participants -このタブでは、Indexer、Delegator、Curators など、ネットワークアクティビティに参加している全ての人を俯瞰できます。 以下では、各タブの意味を詳しく説明します。 +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. -### 1. インデクサー(Indexers) +### 1. Indexers -![エクスプローラーイメージ 4](/img/Indexer-Pane.png) +![Explorer Image 4](/img/Indexer-Pane.png) -まず、インデクサーから説明します。 インデクサーはプロトコルのバックボーンであり、サブグラフに利害関係を持ち、インデックスを作成し、サブグラフを消費する人にクエリを提供します。 インデクサーテーブルでは、インデクサーのデリゲーションパラメータ、ステーク、各サブグラフへのステーク量、クエリフィーとインデクシング報酬による収益を確認することができます。 詳細は以下のとおりです: +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. 
Deep dives below: -- Query Fee Cut - デリゲーターとの分配時にインデクサーが保持するクエリーフィーリベートの割合 -- Effective Reward Cut - デリゲーションプールに適用されるインデックス報酬のカット。 これがマイナスの場合、インデクサーが報酬の一部を手放していることを意味します。 プラスの場合は、インデクサーが報酬の一部を保持していることを意味します -- Cooldown Remaining - インデクサーが上記のデリゲーションパラメータを変更できるようになるまでの残り時間です。 クールダウン期間は、インデクサーがデリゲーションパラメータを更新する際に設定します -- Owned - インデクサーが預けているステークで、悪意のある行為や不正な行為があった場合にスラッシュされる可能性があります -- Delegated - デリゲーターからのステークで、インデクサーが割り当てることができるが、スラッシュはできません -- Allocated - インデックスを作成中のサブグラフに対してインデクサーが割り当てているステーク額 -- Available Delegation Capacity - 過剰デリゲーションになる前に、インデクサーが受け取ることができるデリゲーション・ステーク量 -- Max Delegation Capacity - インデクサーが生産的に受け取ることができるデリゲーション・ステークの最大量。 過剰なデリゲーション・ステークは割り当てや報酬の計算には使用できません -- Query Fees - あるインデクサーのクエリに対してエンドユーザーが支払った手数料の合計額です -- Indexer Rewards - インデクサーとそのデリゲーターが過去に獲得したインデクサー報酬の総額。 インデクサー報酬は GRT の発行によって支払われます +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -インデクサーはクエリ報酬とインデックス報酬の両方を得ることができます。 機能的には、ネットワーク参加者が GRT をインデクサーにデリゲーションしたときに発生します。 これにより、インデクサーはそのインデクサーパラメータに応じてクエリフィーや報酬を受け取ることができます。 インデックスパラメータの設定は、表の右側をクリックするか、インデクサーのプロフィールにアクセスして「Delegate」ボタンをクリックすることで行うことができます。 +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. -インデクサーになる方法については、公式ドキュメントや The Graph Academy のインデクサーガイドを参考にしてください。 +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) -![インデックス作成の詳細](/img/Indexing-Details-Pane.png) +![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. キュレーター +### 2. Curators -キュレーターはサブグラフを分析し、どのサブグラフが最高品質であるかを特定します。 キュレーターが魅力的なサブグラフを見つけたら、そのボンディングカーブにシグナルを送ることでキュレーションすることができます。 そうすることで、キュレーターはインデクサーにどのサブグラフが高品質であり、インデックスを作成すべきかを知らせることができます。 +Curators analyze subgraphs to identify which subgraphs are of highest quality. 
Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. -キュレーターはコミュニティのメンバー、データ消費者、あるいはサブグラフの開発者でもあり、GRT トークンをボンディングカーブに預けることで自分のサブグラフにシグナルを送ります。 GRT を預け入れることで、キュレーターはサブグラフのキュレーションシェアを獲得します。 その結果、キュレーターは、自分がシグナルを送ったサブグラフが生成したクエリフィーの一部を得ることができます。 ボンディングカーブは、キュレーターが最高品質のデータソースをキュレーションする動機付けとして機能します。 このセクションの「Curator」テーブルでは、以下を確認することができます: +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: -- キュレーターがキュレーションを開始した日付 -- デポジットされた GRT の数 -- キュレーターが所有するシェア数 +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns -![エクスプローラーイメージ 6](/img/Curation-Overview.png) +![Explorer Image 6](/img/Curation-Overview.png) -キュレーターの役割についてさらに詳しく知りたい場合は、[The Graph Academy](https://thegraph.academy/curators/) か [official documentation.](/curating)を参照してください。 +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) -### 3. デリゲーター +### 3. Delegators -デリゲーターは、グラフネットワークの安全性と分散性を維持するための重要な役割を担っています。 デリゲーターは、GRT トークンを 1 人または複数のインデクサーにデリゲート(=「ステーク」)することでネットワークに参加します。 デリゲーターがいなければ、インデクサーは大きな報酬や手数料を得ることができません。 そのため、インデクサーは獲得したインデクシング報酬やクエリフィーの一部をデリゲーターに提供することで、デリゲーターの獲得を目指します。 +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. -一方、デリゲーターは、過去の実績、インデックス作成報酬率、クエリ手数料のカット率など、さまざまな変数に基づいてインデクサーを選択します。 また、コミュニティ内での評判も関係してきます。 選ばれたインデクサーとは、 [The Graph’s Discord](https://thegraph.com/discord) や [The Graph Forum](https://forum.thegraph.com/)でつながることをお勧めします。 +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! 
-![エクスプローラーイメージ 7](/img/Delegation-Overview.png) +![Explorer Image 7](/img/Delegation-Overview.png) -「Delegators」テーブルでは、コミュニティ内のアクティブなデリゲーターを確認できるほか、以下のような指標も確認できます: +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: -- デリゲーターがデリゲーションしているインデクサー数 -- デリゲーターの最初のデリゲーション内容 -- デリゲーターが蓄積したがプロトコルから引き出していない報酬 -- プロトコルから撤回済みの報酬 -- 現在プロトコルに保持している GRT 総量 -- 最後にデリゲートした日 +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at -デリゲーターになるための方法をもっと知りたい方は、ぜひご覧ください。 [official documentation](/delegating) や [The Graph Academy](https://docs.thegraph.academy/network/delegators)にアクセスしてください。 +If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## ネットワーク +## Network -「Network」セクションでは、グローバルな KPI に加えて、エポック単位に切り替えてネットワークメトリクスをより詳細に分析する機能があります。 これらの詳細を見ることで、ネットワークが時系列でどのようなパフォーマンスをしているかを知ることができます。 +In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### アクティビティ +### Activity -アクティビティセクションには、現在のすべてのネットワークメトリクスと、時系列の累積メトリクスが表示されます。 ここでは、以下のようなことがわかります: +The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: -- 現在のネットワーク全体のステーク額 -- インデクサーとデリゲーター間のステーク配分 -- ネットワーク開始以来の総供給量、ミント量、バーン GRT -- プロトコルの開始以降のインデックス報酬総額 -- キュレーション報酬、インフレーション・レートなどのプロトコルパラメータ -- 現在のエポックの報酬と料金 +- The current total network stake +- The stake split between the Indexers and their Delegators +- Total supply, minted, and burned GRT since the network inception +- Total Indexing rewards since the inception of the protocol +- Protocol parameters such as curation reward, inflation rate, and more +- Current epoch rewards and fees -特筆すべき重要な詳細をいくつか挙げます: +A few key details that are worth mentioning: -- **クエリフィーは消費者によって生成された報酬を表し**、サブグラフへの割り当てが終了し、提供したデータが消費者によって検証された後、少なくとも 7 エポック(下記参照)の期間後にインデクサが請求することができます(または請求しないこともできます)。 -- **Iインデックス報酬は、エポック期間中にインデクサーがネットワーク発行から請求した報酬の量を表しています。**プロトコルの発行は固定されていますが、報酬はインデクサーがインデックスを作成したサブグラフへの割り当てを終了して初めてミントされます。 そのため、エポックごとの報酬数は変動します(例えば、あるエポックでは、インデクサーが何日も前から開いていた割り当てをまとめて閉じたかもしれません)。 +- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). 
-![エクスプローラーイメージ 8](/img/Network-Stats.png) +![Explorer Image 8](/img/Network-Stats.png) -### エポック +### Epochs -エポックセクションでは、エポックごとに以下のようなメトリクスを分析できます: +In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- エポックの開始または終了ブロック -- 特定のエポックで発生したクエリーフィーと収集されたインデクシングリワード -- エポックステータス: クエリフィーの徴収と分配に関するもので、さまざまな状態がある - - アクティブエポックとは、インデクサーが現在ステークを割り当て、クエリフィーを収集しているエポックのこと - - 決済エポックとは、状態のチャンネルを決済しているエポックのこと。 つまり、消費者がインデクサーに対して異議を唱えた場合、インデクサーはスラッシュされる可能性があるということ - - 分配エポックとは、そのエポックの状態チャンネルが確定し、インデクサーがクエリフィーのリベートを請求できるようになるエポックのこと - - 確定したエポックとは、インデクサーが請求できるクエリフィーのリベートが残っていないエポックのことで、確定している +- Epoch start or end block +- Query fees generated and indexing rewards collected during a specific epoch +- Epoch status, which refers to the query fee collection and distribution and can have different states: + - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees + - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. + - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. -![エクスプローラーイメージ 9](/img/Epoch-Stats.png) +![Explorer Image 9](/img/Epoch-Stats.png) -## ユーザープロファイル +## Your User Profile -ネットワーク統計について説明しましたが、次は個人のプロフィールについて説明します。 個人プロフィールは、ネットワークにどのように参加しているかに関わらず、自分のネットワーク活動を確認するための場所です。 あなたの Ethereum ウォレットがあなたのユーザープロフィールとして機能し、ユーザーダッシュボードで確認することができます。 +Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: -### プロフィールの概要 +### Profile Overview -ここでは、あなたが現在行ったアクションを確認できます。 また、自分のプロフィール情報、説明、ウェブサイト(追加した場合)もここに表示されます。 +This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). -![エクスプローラーイメージ 10](/img/Profile-Overview.png) +![Explorer Image 10](/img/Profile-Overview.png) -### サブグラフタブ +### Subgraphs Tab -「Subgraphs」タブをクリックすると、公開されているサブグラフが表示されます。 サブグラフは分散型ネットワークに公開されたときにのみ表示されます。 +If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. -![エクスプローラーイメージ 11](/img/Subgraphs-Overview.png) +![Explorer Image 11](/img/Subgraphs-Overview.png) -### インデックスタブ +### Indexing Tab -「Indexing」タブをクリックすると、サブグラフに対するすべてのアクティブな割り当てと過去の割り当てが表になっており、分析してインデクサーとしての過去のパフォーマンスを見ることができるチャートも表示されます。 +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. -このセクションには、インデクサー報酬とクエリフィーの詳細も含まれます。 以下のような指標が表示されます: +This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics: -- Delegated Stake - Delegator からのステークで、あなたが割り当て可能だが、スラッシュされないもの -- Total Query Fees - 提供したクエリに対してユーザーが支払った料金の合計額 -- Indexer Rewards - 受け取ったインデクサー報酬の総額(GRT) -- Fee Cut - デリゲーターとの分配時に保持するクエリフィーリベートの割合 -- Rewards Cut - デリゲーターとの分配時に保有するインデクサー報酬の割合 -- Owned - 預けているステークであり、悪質な行為や不正行為があった場合にスラッシュされる可能性がある +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior -![エクスプローラーイメージ 12](/img/Indexer-Stats.png) +![Explorer Image 12](/img/Indexer-Stats.png) -### デリゲーションタブ +### Delegating Tab -デリゲーターは、グラフネットワークにとって重要な存在です。 デリゲーターは知見を駆使して、健全な報酬を提供するインデクサーを選ばなければなりません。 このタブでは、アクティブなデリゲーションの詳細と過去の履歴、そしてデリゲートしたインデクサーの各指標を確認することができます。 +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. -ページの前半には、自分のデリゲーションチャートと報酬のみのチャートが表示されています。 左側には、現在のデリゲーションメトリクスを反映した KPI が表示されています。 +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. -このタブで見ることができるデリゲーターの指標は以下の通りです。 +The Delegator metrics you’ll see here in this tab include: -- デリゲーション報酬の合計 -- 未実現報酬の合計 -- 実現報酬の合計 +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards -ページの後半には、デリゲーションテーブルがあります。 ここには、あなたがデリゲートしたインデクサーとその詳細(報酬のカットやクールダウンなど)が表示されています。 +In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). -テーブルの右側にあるボタンで、デリゲートを管理することができます。追加でデリゲートする、デリゲートを解除する、解凍期間後にデリゲートを取り消すなどの操作が可能です。 +With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -表の右側にあるボタンで、デリゲーションを管理することができます。 +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). -![エクスプローラーイメージ 13](/img/Delegation-Stats.png) +![Explorer Image 13](/img/Delegation-Stats.png) -### キュレーションタブ +### Curating Tab -「Curation」タブでは、自分がシグナリングしている(その結果、クエリフィーを受け取ることができる)サブグラフを確認することができます。 シグナリングにより、キュレーターはインデクサーに対して、どのサブグラフが価値があり信頼できるかを強調することができ、その結果、そのサブグラフにインデックスを付ける必要があることを示すことができます。 +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
-このタブでは、以下の概要を見ることができます: +Within this tab, you’ll find an overview of: -- キュレーションしている全てのサブグラフとシグナルの詳細 -- サブグラフごとのシェアの合計 -- サブグラフごとのクエリ報酬 -- 更新日の詳細 +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph +- Updated at date details -![エクスプローラーイメージ 14](/img/Curation-Stats.png) +![Explorer Image 14](/img/Curation-Stats.png) -## プロフィールの設定 +## Your Profile Settings -ユーザープロフィールでは、個人的なプロフィールの詳細(ENS ネームの設定など)を管理することができます。 インデクサーの方は、さらに多くの設定が可能です。 ユーザープロファイルでは、デリゲーションパラメーターとオペレーターを設定することができます。 +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. -- オペレーターは、インデクサーに代わって、割り当ての開始や終了など、プロトコル上の限定的なアクションを行います。 オペレーターは通常、ステーキングウォレットとは別の他の Ethereum アドレスで、インデクサーが個人的に設定できるネットワークへのゲート付きアクセス権を持っています。 -- 「Delegation parameters」では、自分とデリゲーターの間で GRT の分配をコントロールすることができます。 +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. -![エクスプローラーイメージ 15](/img/Profile-Settings.png) +![Explorer Image 15](/img/Profile-Settings.png) -グラフエクスプローラーは、分散型データの世界への公式ポータルとして、ネットワーク内でのあなたの役割に関わらず、様々なアクションを取ることができます。 アドレスの横にあるドロップダウンメニューを開き、「Settings」ボタンをクリックすると、自分のプロフィール設定ができます。 +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.
![Wallet details](/img/Wallet-Details.png)
From c729e9b5246ccee4e59aa30d5d10d1274a37ea42 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:58 -0500 Subject: [PATCH 118/241] New translations explorer.mdx (Korean) --- pages/ko/explorer.mdx | 212 +++++++++++++++++++++--------------------- 1 file changed, 106 insertions(+), 106 deletions(-) diff --git a/pages/ko/explorer.mdx b/pages/ko/explorer.mdx index 816139ae9a58..c8df28cfe03f 100644 --- a/pages/ko/explorer.mdx +++ b/pages/ko/explorer.mdx @@ -1,211 +1,211 @@ --- -title: 탐색기 +title: The Graph Explorer --- -그래프 탐색기, 혹은 우리가 흔히 부르는 것 처럼, 서브그래프와 네트워크 데이터의 세계로 향하는 탈중앙화 포탈에 오신것을 환영합니다! 그래프 탐색기는 다른 서브그래프 개발자, dapp 개발자, 큐레이터, 인덱서 및 위임자와 상호 작용할 수 있는 다양한 부분들로 구성됩니다. 그래프 탐색기에 대한 일반적인 개요를 알아보기 위해 아래의 비디오를 확인하세요. +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below):
-## 서브그래프 +## Subgraphs -먼저, 여러분들이 막 여러분의 서브그래프 스튜디오에서 서브그래프를 배포 및 게시한 경우, 네비게이션 바 상단에 있는 서브그래프 탭은 분산형 네트워크에서 여러분들 소유의 완료된 서브그래프(및 다른 사람의 서브그래프)를 볼 수 있는 장소입니다. 여기에서 여러분들은 생성된 날짜, 신호 양 또는 이름을 기준으로 찾고 있는 정확한 서브그래프를 찾을 수 있습니다. +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -여러분들이 서브그래프를 클릭하면, 플레이그라운드에서 쿼리를 테스트하고 네트워크 세부 정보를 활용하여 정보에 입각한 결정을 내릴 수 있습니다. 또한 여러분들은 자신의 서브그래프 또는 다른 사람의 서브그래프에 GRT 신호를 보내어, 인덱서가 그 중요성과 품질을 인식하도록 할 수도 있습니다. 이것은 서브그래프의 신호가 인덱싱되도록 인센티브를 부여하기 때문에 매우 중요합니다. 이는 결국 쿼리를 제공하기 위해 네트워크에 표시된다는 것을 의미합니다. +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -각 서브그래프의 전용 페이지에는 몇 가지 세부 정보가 표시됩니다. 이러한 사항들이 포함되어 있습니다: +On each subgraph’s dedicated page, several details are surfaced. These include: -- 서브그래프 상의 시그널/언시그널 -- 차트, 현재 배포 ID 및 다른 메타데이터와 같은 더욱 자세한 정보 보기 -- 서브그래프의 과거 반복 과정을 탐색하기 위한 버전 전환 -- GraphQL을 통한 서브그래프 쿼리 -- 플레이그라운드에서의 서브그래프 테스트 -- 특정 서브그래프에 인덱싱하는 인덱서 보기 -- 서브그래프 상태 (할당, 큐레이터, 기타사항) -- 서브그래프를 게시한 엔티티 보기 +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## 참여자 +## Participants -이 탭에서는 인덱서, 위임자 및 큐레이터와 같이 네트워크 활동에 참여하는 모든 주체들을 조감도로 볼 수 있습니다. 아래에서, 저희는 여러분들을 위해 각 탭이 의미하는 바가 무엇인지 자세히 살펴보겠습니다. +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. -### 1. 인덱서 +### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -인덱서부터 시작해보도록 하겠습니다. 인덱서는 프로토콜의 백본으로, 이들은 서브그래프에 스테이킹 및 인덱싱을 수행하고, 서브그래프를 사용하는 모든 사람에게 쿼리를 제공합니다. 인덱서 테이블에서 여러분들은 인덱서의 위임 매개변수, 그들의 스테이킹, 각 서브그래프에 대한 스테이킹, 쿼리 수수료 및 인덱싱 보상으로 얻은 수익을 볼 수 있습니다. 좀 더 심청적인 내용은 아래와 같습니다: +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: -- Query Fee Cut - 위임자들과 쿼리 피를 나눌 때, 인덱서가 가져가는 쿼리 수수료의 리베이트 비율 -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. 이 항목이 양수이면, 이는 그 인덱서가 그들의 보상의 일부분을 수취함을 있음을 의미합니다. 
If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - 인덱서가 위의 위임 매개변수를 변경할 수 있을 때까지 남은 시간입니다. Cooldown 기간은 인덱서가 그들의 위임 매개변수들을 업데이트 할 때 인덱서에 의해 설정됩니다. -- Owned - 이것은 인덱서의 예치된 스테이킹 내역이며, 악의적이거나 잘못된 행동으로 인해 슬래싱 패널티를 받을 수 있습니다. -- Delegated - 인덱서에 의해 할당될 수는 있지만, 슬래싱 패널티는 받을 수 없는 위임자들의 스테이킹 지분입니다. -- Allocated - 인덱서들이 그들이 인덱싱하는 서브그래프에 적극적으로 할당하는 스테이킹 지분입니다. -- Available Delegation Capacity - 인덱서가 위임 수용력 이상으로 과도하게 위임받기 전, 인덱서들이 여전히 받을 수 위임 스테이킹 수량입니다. -- Max Delegation Capacity - 인덱서가 생산적으로 수용할 수 있는 지분 위임 최대 수량입니다. 이를 초과하여 위임받은 지분들의 경우, 할당 혹은 보상 계산에 사용될 수 업습니다. -- Query Fees - 이는 최종 사용자들이 모든 시간 동안 인덱서들의 쿼리들에 대하여 지불해야하는 총 수수료입니다. -- Indexer Rewards - 이는 모든 시간 동안 인덱서 및 그들의 위임자들이 창출하는 총 인덱서 보상입니다. 인덱서 보상은 GRT 발행을 통해 지급됩니다. +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -인덱서들은 쿼리 수수료와 인덱싱 보상을 모두 얻을 수 있습니다. 기능적으로, 이는 네트워크 참가자가 GRT를 인덱서에 위임할 때 발생합니다. 이를 통해 인덱서는 인덱서 매개변수에 따라 쿼리 수수료와 보상을 받을 수 있습니다. 인덱싱 매개변수는 테이블의 오른쪽을 클릭하거나 인덱서의 프로필로 이동하여 "Delegate" 버튼을 클릭하여 설정합니다. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. -인덱서가 되는 방법에 대해 더 자세히 알아보고 싶으신 분들은, [official documentation](/indexing) 혹은 [The Graph Academy Indexer guides](https://thegraph.academy/delegators/choosing-indexers/)를 확인해보시길 바랍니다. +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) ![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. 큐레이터 +### 2. Curators -큐레이터는 서브그래프들을 분석하여 어떤 서브그래프가 최고 품질의 서브그래프인지를 식별합니다. 일단 큐레이터가 잠재적으로 매력적인 서브그래프를 발견하면, 그들은 본딩 커브에 신호를 보내서 그것을 큐레이션 할 수 있습니다. 이를 통해 큐레이터는 인덱서에게 어떤 서브래프가 고품질이고, 인덱싱 되어야 하는지를 알려줍니다. +Curators analyze subgraphs to identify which subgraphs are of highest quality. 
Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. -큐레이터는 커뮤니티 구성원, 데이터 소비자, 혹은 심지어 GRT 토큰을 본딩 커브에 넣음으로써 자신의 서브그래프에 신호를 보내는 서브그래프 개발자가 될 수 있습니다. GRT를 예치함으로써 큐레이터는 서브그래프의 큐레이션 쉐어를 발행합니다. 결과적으로 큐레이터는 그들이 신호한 서브그래프가 생성하는 쿼리 수수료의 일부를 얻을 수 있습니다. 본딩 커브는 큐레이터가 최고 품질의 데이터 소스를 큐레이션하도록 동기부여를 합니다. 이 섹션의 큐레이터 테이블에서 다음 사항들을 확인할 수 있습니다. +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: -- 큐레이터가 큐레이팅을 시작한 날 -- 예치된 GRT의 수 -- 큐레이터가 소유한 쉐어 수 +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns ![Explorer Image 6](/img/Curation-Overview.png) -만약, 여러분들이 큐레이터의 역할에 대해 더 알고 싶으시다면, [The Graph Academy](https://thegraph.academy/curators/) 혹은 [official documentation](/curating) 링크를 클릭하셔서 더욱 자세히 살펴보시기 바랍니다. +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) -### 3. 위임자 +### 3. Delegators -위임자는 더그래프 네트워크의 보안 및 분산화 유지에 중요한 역할을 수행합니다. 이들은 하나 이상의 인덱서에 GRT 토큰을 위임(즉, "스테이킹")하여 네트워크에 참여합니다. 위임자 없이는, 인덱서가 많은 양의 보상과 수수료를 받을 가능성이 줄어듭니다. 따라서 인덱서들은 인덱싱 보상 및 쿼리 수수료의 일부를 위임자들에게 제공하는 정책을 통해 위임자들을 유치합니다. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. -반면에, 위임자들은 과거 성과, 인덱싱 보상률, query fee cuts 등 다양한 변수들을 기준으로 인덱서를 선택합니다. 커뮤니티 내에서의 명성 또한 이에 한 요소로 작용할 수 있습니다. [더그래프 디스코드](https://thegraph.com/discord) 혹은 [더그래프 포럼](https://forum.thegraph.com/)을 통해 인덱서들과 소통하시길 추천드립니다! +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -위임자 테이블에서는 커뮤니티 내의 활성 위임자들 및 다음과 같은 메트릭스를 볼 수 있습니다. +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: -- 어떠한 위임자가 위임을 시행하고 있는 인덱서들의 수 -- 어떠한 위임자의 본 위임 -- 그들이 축적하였지만, 프로토콜로부터 인출하지 않은 보상들 -- 그들이 프로토콜로부터 인출하여 실현된 보상들 -- 그들이 현재 프로토콜 상에 보유하고 있는 GRT의 총 수량 -- 그들이 마지막으로 위임 행위를 한 날짜 +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at -위임자가 되는 방법에 대해 더 알고 싶으시다면, 더 둘러보실 필요 없습니다! 
여러분들이 지금 하셔야 할 일은 [official documentation](/delegating) 혹은 [The Graph Academy](https://docs.thegraph.academy/network/delegators)에 방문 하는 것입니다! +If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## 네트워크 +## Network -네트워크 섹션에서 여러분들은 에폭을 기준으로 전환하는 전환하는 능력 뿐만 아니라, 글로벌 KPI 및 네트워크 메트릭을 보다 자세히 분석할 수 있는 기능을 보실 수 있습니다. 이러한 세부 정보를 통해 시간이 지남에 따라 네트워크가 어떻게 작동하는지 알 수 있습니다. +In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### 활동 +### Activity -활동 섹션에는 모든 현재 네트워크 메트릭스와 시간에 따른 일부 누적 메트릭이 있습니다. 여기서 여러분들은 다음과 같은 사항들을 볼 수 있습니다. +The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: -- 현재 네트워크 스테이킹 총량 -- 인덱서와 그들의 위임자 사이의 스테이킹 분할 내역 -- 네트워크 시작 이후 GRT의 총 공급량, 발행량 및 소각량 -- 프로토콜 시작 이후 총 인덱싱 보상들 -- 보상, 인플레이션 비율 등과 같은 프로토콜 파라미터 -- 현재 에폭 보상 및 수수료들 +- The current total network stake +- The stake split between the Indexers and their Delegators +- Total supply, minted, and burned GRT since the network inception +- Total Indexing rewards since the inception of the protocol +- Protocol parameters such as curation reward, inflation rate, and more +- Current epoch rewards and fees -언급할만한 가치가 있는 몇 가지 주요 세부정보 : +A few key details that are worth mentioning: -- **쿼리 수수료는 소비자들에 의해 생성된 수수료들을 나타냅니다.** 그리고 이들은 해당 서브그래프에 대한 인덱서들의 할당이 종료되고 소비자가 제공한 데이터들이 검증된 다음, 최소 7 에폭의 기간이 지난 이후에 인덱서들에 의해 클레임(혹은 클레임 불가)될 수 있습니다. -- **인덱싱 보상은 해당 에폭 동안 네트워크 발행으로부터 인덱서가 청구한 보상 금액을 나타냅니다.** 프로토콜 발행은 고정되어 있더라도, 해당 보상은 인덱서가 인덱싱 중인 서브그래프에 대한 할당을 닫아야지만 발행됩니다. 따라서, 에폭 마다 보상 횟수는 다양합니다(예: 일부 에폭 동안에, 인덱서는 며칠 동안 열려 있던 할당을 일괄적으로 닫았을 수 있습니다). +- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) -### 에폭(Epochs) +### Epochs In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- 에폭 시작 혹은 종료 블록 -- 특정 에포크 동안 생성된 쿼리 수수료 및 인덱싱 보상 -- 에폭 상태(Epoch status)는 다음과 같은 다양한 상태를 가질 수 있는 쿼리 수수료 수집 및 분배를 나타냅니다. - - 활성 에폭(The active epoch)은 현재 인덱서가 지분을 할당 및 쿼리 수수료 수집을 진행하고 있는 에폭입니다. - - 결산 에폭(The settling epochs)은 상태 채널이 결산되고 있는 에폭입니다. 이는 소비자가 인덱서를 상대로 분쟁을 제기하는 경우, 해당 인덱서는 슬래싱 패널티를 받을 수 있음을 의미합니다. - - 분배 에폭(The distributing epochs)은 해당 에폭들에 대한 상태 채널이 정산되고 인덱서가 쿼리 수수료 리베이트를 청구할 수 있는 에폭들입니다. 
+- Epoch start or end block +- Query fees generated and indexing rewards collected during a specific epoch +- Epoch status, which refers to the query fee collection and distribution and can have different states: + - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees + - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. + - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. ![Explorer Image 9](/img/Epoch-Stats.png) -## 여러분들의 사용자 프로필 +## Your User Profile -저희는 네트워크 통계에 대해 이야기했으므로, 이제 개인 프로필로 넘어가 보도록 하겠습니다. 여러분들이 네트워크에 참여하는 방식에 관계없이, 여러분들의 개인 프로필은 여러분들의 네트워크 활동을 볼 수 있는 영역입니다. 여러분들의 이더리움 지갑이 사용자 프로필 역할을 하며, 여러분들은 사용자 대시보드를 통해 다음 사항들을 확인 가능합니다 : +Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: -### 프로필 개요 +### Profile Overview -이곳에서 여러분들은 이전에 수행한 현황을 확인할 수 있습니다. 이곳에서 여러분들은 프로필 정보, 설명 및 웹사이트(추가한 경우) 또한 찾으실 수 있습니다. +This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) -### 서브그래프 탭 +### Subgraphs Tab -서브그래프 탭을 클릭하면 배포된 서브그래프들이 표시됩니다. 여기에는 테스트 목적으로 CLI와 함께 배포된 서브그래프는 포함되지 않습니다. - 서브그래프는 탈중앙화 네트워크에 배포될 때만 표시됩니다. +If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) -### 인덱싱 탭 +### Indexing Tab -만약 여러분들이 인덱싱 탭을 클릭하면, 서브그래프에 대한 모든 활성 및 과거 할당 내역들을 볼 수 있는 테이블이 존재하며, 인덱서로서 여러분들의 과거 성과를 분석하고 볼 수 있는 차트 또한 찾을 수 있습니다. +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. -이 섹션에는 순 인덱서 보상 및 순 쿼리 수수료에 대한 세부 정보도 포함됩니다. 여러분들은 다음의 메트릭스들을 확인 가능합니다. +This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics: -- Delegated Stake - 여러분들에 의해 할당될 수는 있지만, 슬래싱 패널티는 받지 않는 위임자의 지분 -- Total Query Fees - 시간이 지남에 따라 여러분이 제공한 쿼리에 대해 사용자가 지불한 총 수수료 -- Indexer Rewards - 여러분들이 GRT 로 받은 인덱서 보상의 총 수량 -- Fee Cut - 여러분들이 쿼리 수수료를 위임자들과 나눌 때, 여러분들이 취하는 쿼리 수수료의 비율(%) -- Rewards Cut - 여러분들이 인덱서 수수료를 위임자들과 나눌 때, 여러분들이 취하는 인덱서 보상의 비율(%) -- Owned - 악의적인 행동이나 잘못된 행동으로 인해 삭감 패널티를 받을 수 있는 여러분들이 예치한 스테이킹 수량 +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior ![Explorer Image 12](/img/Indexer-Stats.png) -### 위임 탭 +### Delegating Tab -위임자들은 더그래프 네트워크에 매우 중요합니다. 위임자는 자신의 지식을 사용하여 정상적인 보상 수익을 제공할 인덱서를 선택해야 합니다. 여기서 여러분들은 활성 및 과거 위임의 세부 정보를 찾을 수 있으며, 동시에 위임한 인덱서의 메트릭스를 확인할 수 있습니다. +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. -페이지의 처음 절반은 위임 차트와 보상 전용 차트가 표시됩니다. 왼쪽에는 여러분의 현재 위임 메트릭스를 반영하는 KPI가 표시됩니다. +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. -이 탭에서 볼 수 있는 위임자 메트릭스는 다음과 같습니다 : +The Delegator metrics you’ll see here in this tab include: -- 총 위임 보상 -- 총 미실현 보상 -- 총 실현 보상 +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards -페이지 후반에는 여러분의 위임 표가 존재합니다. 여기서 여러분들은 위임한 인덱서와 해당 세부 정보(예: rewards cuts, 재사용 대기 시간 등)를 볼 수 있습니다. +In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -표의 오른쪽에 있는 버튼을 사용하여 여러분들의 위임을 관리할 수 있습니다(추가 위임, 위임 취소 혹은 해빙 기간 이후 위임에 대한 출금). +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). ![Explorer Image 13](/img/Delegation-Stats.png) -### 큐레이팅 탭 +### Curating Tab -큐레이션 탭에서 여러분들은 여러분들이 신호를 보내고 있는 모든 서브그래프들을 찾을 수 있습니다.(이로인해 여러분들은 쿼리 수수료를 받을 수 있습니다.) 시그널링을 통해 큐레이터는 인덱서들에게 어떤 서브그래프가 가치 있고 신뢰할 수 있는지를 강조할 수 있으므로, 이들이 인덱싱 되어야 한다는 신호를 보낼 수 있게됩니다. +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
-이 탭 내에서 여러분들은 다음 사항들의 개요를 확인할 수 있습니다 : +Within this tab, you’ll find an overview of: -- 신호 명세사항들과 함께 여러분이 신호를 보내고 있는 모든 서브그래프들 -- 서브그래프 별 총 쉐어 -- 서브그래프 별 쿼리 보상들 -- 날짜 세부정보에 대한 업데이트 내역 +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph +- Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) -## 프로필 설정 +## Your Profile Settings -사용자 프로필 내에서, 개인 프로필 세부 정보(예: ENS 네임 설정)를 관리할 수 있습니다. 만약 여러분들이 인덱서라면, 간편하게 설정에 접근할 수 있습니다. 여러분들의 유저 프로필 내에서, 여러분들은 여러분들의 위임 매개변수 및 운영자 설정을 할 수 있습니다. +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. -- 운영자는 프로토콜에서 인덱서를 대신하여 할당 열기 및 닫기와 같은 제한된 작업을 수행합니다. 운영자는 일반적으로 인덱서가 개인적으로 설정할 수 있는, 네트워크에 대한 게이트 액세스가 되어있는, 스테이킹 지갑과는 별도의 다른 이더리움 주소입니다. -- 위임 매개변수를 사용하면 여러분들과 여러분들의 위임자 간의 GRT 분배를 제어할 수 있습니다. +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. ![Explorer Image 15](/img/Profile-Settings.png) -탈중앙화된 데이터의 세계로 향하는 공식 포털인 더그래프 탐색기를 사용하면, 네트워크에서의 역할에 상관없이 다양한 행위가 가능합니다. 여러분들의 주소 옆에 있는 드롭다운 메뉴를 연 다음, 설정 버튼을 클릭하면 프로필 설정으로 이동할 수 있습니다. +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. -
Wallet details
+
![Wallet details](/img/Wallet-Details.png)
From 2c78d44a0f9e7f859e974bb9f0f53f2eb858e623 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:56:59 -0500 Subject: [PATCH 119/241] New translations explorer.mdx (Chinese Simplified) --- pages/zh/explorer.mdx | 216 +++++++++++++++++++++--------------------- 1 file changed, 108 insertions(+), 108 deletions(-) diff --git a/pages/zh/explorer.mdx b/pages/zh/explorer.mdx index 85698d600f9e..c8df28cfe03f 100644 --- a/pages/zh/explorer.mdx +++ b/pages/zh/explorer.mdx @@ -1,8 +1,8 @@ --- -title: 浏览器 +title: The Graph Explorer --- -欢迎使用 Graph 浏览器,或者我们可以称它为您进入子图和网络数据世界的去中心化门户。 The Graph 浏览器由多个部分组成,您可以在其中与其他子图开发人员、去中心化应用开发人员、策展人、索引人和 委托人进行交互。 有关 Graph 浏览器的通用概述,请查看下面的视频(或继续阅读下面的内容): +Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below):
-## 子图 +## Subgraphs -首先,如果您刚刚在 子图工作室中完成部署和发布您的子图,导航栏顶部的 子图选项卡是您在去中心化网络上查看您自己完成的子图(以及其他人的子图)的地方。 在这里,您将能够根据创建日期、信号量或名称找到您正在寻找的确切子图。 +First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -当您单击子图时,您将能够在面板上测试查询,并能够利用网络详细信息做出明智的决策。 您还可以在您自己的子图或其他人的子图中发出 GRT 信号,以使索引人意识到其重要性和质量。 这很关键,因为子图上的信号会激励它被索引,这意味着它将出现在网络上,最终为查询提供服务。 +When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -在每个子图的专用页面上,会显示一些详细信息。 这些包括: +On each subgraph’s dedicated page, several details are surfaced. These include: -- 子图上的信号/非信号 -- 查看更多详细信息,例如图表、当前部署 ID 和其他元数据 -- 切换版本以探索子图的过去迭代版本 -- 通过 GraphQL 查询子图 -- 在面板上测试子图 -- 查看在某个子图上建立索引的索引人 -- 子图统计信息(分配、策展人等) -- 查看发布子图的实体 +- Signal/Un-signal on subgraphs +- View more details such as charts, current deployment ID, and other metadata +- Switch versions to explore past iterations of the subgraph +- Query subgraphs via GraphQL +- Test subgraphs in the playground +- View the Indexers that are indexing on a certain subgraph +- Subgraph stats (allocations, Curators, etc) +- View the entity who published the subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## 参与者 +## Participants -在此选项卡中,您可以鸟瞰所有参与网络活动的人员,例如索引人、委托人和策展人。 下面,我们将深入了解每个选项卡对您的意义。 +Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. -### 1. 索引人 +### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -让我们从索引人开始。 索引人是协议的骨干,是那些质押于子图、索引它们并向使用子图的任何人提供查询服务的人。 在 索引人表中,您将能够看到 索引人的委托参数、他们的权益、他们对每个子图的权益以及他们从查询费用和索引奖励中获得的收入。 细则如下: +Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: -- 查询费用削减 - 索引人与委托人拆分时保留的查询费用回扣的百分比 -- 有效的奖励削减 - 应用于委托池的索引奖励削减。 如果它是负数,则意味着索引人正在赠送部分奖励。 如果是正数,则意味着 索引人保留了他们的一些奖励 -- 冷却时间剩余 - 索引人可以更改上述委托参数之前的剩余时间。 冷却时间由索引人在更新其委托参数时设置 -- 已拥有 - 这是索引人的存入股份,可能会因恶意或不正确的行为而被削减 -- 已委托 - 委托人的股权可以由索引人分配,但不能被削减 -- 已分配 - 索引人积极分配给他们正在索引的子图的股权 -- 可用委托容量 - 索引人在过度委托之前仍然可以收到的委托权益数量 -- 最大委托容量 - 索引人可以有效接受的最大委托权益数量。 超出的委托权益不能用于分配或奖励计算。 -- 查询费用 - 这是最终用户一直以来为来自索引人的查询支付的总费用 -- 索引人奖励 - 这是索引人及其委托人在所有时间获得的总索引人奖励。 索引人奖励通过 GRT 发行支付。 +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. 
If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated +- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -索引人可以获得查询费用和索引奖励。 从功能上讲,当网络参与者将 GRT 委托给索引人时,就会发生这种情况。 这使索引人能够根据其索引人参数接收查询费用和奖励。 索引参数可以通过点击表格的右侧来设置,或者通过进入索引人的配置文件并点击“委托”按钮来设置。 +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. -如果您想了解有关 Curator 角色的更多信息,可以通过访问 [The Graph Academy](https://thegraph.academy/curators/)或者 [官方文档](/curating)来实现。 +To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) ![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. 策展人 +### 2. Curators -策展人分析子图以确定哪些子图质量最高。 一旦策展人发现了一个潜在有吸引力的子图,他们就可以通过在其粘合曲线上发出信号来策展它。 在这样做时,策展人让索引人知道哪些子图是高质量的并且应该被索引。 +Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. -策展人可以是社区成员、数据消费者,甚至是子图开发者,他们通过将 GRT 代币存入粘合曲线来在自己的子图上发出信号。 通过存入 GRT,策展人铸造了子图的策展份额。 因此,策展人有资格获得他们发出信号的子图生成的一部分查询费用。 粘合曲线激励策展人策展最高质量的数据源。 本节中的 策展人表将允许您查看: +Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. 
The Curator table in this section will allow you to see: -- 策展人开始策展的日期 -- 已存入的 GRT 数量 -- 策展人拥有的股份数量 +- The date the Curator started curating +- The number of GRT that was deposited +- The number of shares a Curator owns ![Explorer Image 6](/img/Curation-Overview.png) -如果你想了解更多关于策展人角色的信息,你可以通过访问 [The Graph Academy](https://thegraph.academy/curators/) 的以下链接或[官方文档](/curating)来实现。 +If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) -### 3. 委托人 +### 3. Delegators -委托人在维护 The Graph 网络的安全性和去中心化方面发挥着关键作用。 他们通过将 GRT 代币委托给一个或多个索引人(即“质押”)来参与网络。 如果没有委托人,索引人不太可能获得可观的奖励和费用。 因此,索引人试图通过向委托人提供他们获得的一部分索引奖励和查询费用来吸引委托人。 +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. -委托人反过来根据许多不同的变量选择索引人,例如过去的表现、索引奖励率和查询费用削减。 社区内的声誉也可以起到一定的作用! 建议连接通过[The Graph’s Discord](https://thegraph.com/discord) 或者 [The Graph 论坛](https://forum.thegraph.com/)选择索引人! +Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -委托人表将允许您查看社区中的活跃委托人,以及以下指标: +The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: -- 委托人委托给的索引人数量 -- 委托人的原始委托 -- 他们已经积累但没有退出协议的奖励 -- 他们从协议中撤回的已实现奖励 -- 他们目前在协议中的 GRT 总量 -- 他们上次授权的日期 +- The number of Indexers a Delegator is delegating towards +- A Delegator’s original delegation +- The rewards they have accumulated but have not withdrawn from the protocol +- The realized rewards they withdrew from the protocol +- Total amount of GRT they have currently in the protocol +- The date they last delegated at -如果您想了解更多有关如何成为委托人的信息,请不要再犹豫了! 您所要做的就是前往 [官方文档](/delegating) 或者 [The Graph Academy](https://docs.thegraph.academy/network/delegators). +If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## 网络 +## Network -在网络部分,您将看到全局 KPI 以及切换到每个时期的基础和更详细地分析网络指标的能力。 这些详细信息将让您了解网络随时间推移的表现。 +In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### 活动 +### Activity -活动部分包含所有当前网络指标以及一些随时间累积的指标。 在这里,您可以看到以下内容: +The activity section has all the current network metrics as well as some cumulative metrics over time. 
Here you can see things like: -- 当前网络总质押量 -- 索引人和他们的委托人之间的股份分配 -- 自网络成立以来的总供应量、铸造和燃烧的 GRT -- 自协议成立以来的总索引奖励 -- 协议参数,例如管理奖励、通货膨胀率等 -- 当前时期奖励和费用 +- The current total network stake +- The stake split between the Indexers and their Delegators +- Total supply, minted, and burned GRT since the network inception +- Total Indexing rewards since the inception of the protocol +- Protocol parameters such as curation reward, inflation rate, and more +- Current epoch rewards and fees -一些值得一提的关键细节: +A few key details that are worth mentioning: -- **查询费用代表消费者产生的费用,**在他们对子图的分配已经关闭并且他们提供的数据已经被关闭后,在至少 7 个周期(见下文)之后,索引人可以要求(或不要求)它们得到消费者的认可。 -- **索引奖励代表索引人在该时期从网络发行中索取的奖励数量。 ** 尽管协议发布是固定的,但只有当索引人关闭对他们一直在索引的子图的分配时才会产生奖励。 因此,每个时期的奖励数量是不同的(即,在某些时期,索引人可能会集体关闭已开放多天的分配)。 +- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) -### 时期 +### Epochs -在 时期部分,您可以在每个 时期的基础上分析指标,例如: +In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- 时期开始或结束块 -- 在特定时期产生的查询费用和索引奖励 -- 时期状态,指的是查询费用的收取和分配,可以有不同的状态: - - 活跃时期是索引人目前正在分配权益并收取查询费用的时期 - - 稳定时期是状态通道正在稳定的时期。 这意味着如果消费者对他们提出争议,索引人将受到严厉惩罚。 - - 分发 时期是 时期的状态通道正在结算的 时期,索引人可以要求他们的查询费用回扣。 - - 最终确定的时期是索引人没有留下查询费回扣的时期,因此被最终确定。 +- Epoch start or end block +- Query fees generated and indexing rewards collected during a specific epoch +- Epoch status, which refers to the query fee collection and distribution and can have different states: + - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees + - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. + - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. ![Explorer Image 9](/img/Epoch-Stats.png) -## 您的用户资料 +## Your User Profile -既然我们已经讨论了网络统计信息,让我们继续讨论您的个人资料。 无论您以何种方式参与网络,您的个人资料都是您查看网络活动的地方。 您的以太坊钱包将作为您的用户资料,通过用户仪表板,您将能够看到: +Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: -### 个人资料概览 +### Profile Overview -您可以在此处查看您当前采取的任何操作。 您也可以在这里找到您的个人资料信息、描述和网站(如果您添加了)。 +This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). 
![Explorer Image 10](/img/Profile-Overview.png) -### 子图标签 +### Subgraphs Tab -如果单击子图选项卡,您将看到已发布的子图。 这将不包括为测试目的使用 CLI 部署的任何子图——子图只会在它们发布到去中心化网络时显示。 +If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) -### 索引标签 +### Indexing Tab -如果您单击“索引”选项卡,您将找到一个表格,其中包含对子图的所有活动和历史分配,以及您可以分析和查看过去作为索引人的表现的图表。 +If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. -本节还将包括有关您的净索引人奖励和净查询费用的详细信息。 您将看到以下指标: +This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: -- 已委托股份 - 委托人的股份,您可以分配但不能被削减 -- 总查询费用 - 用户在一段时间内为您提供的查询支付的总费用 -- 索引人奖励- 您收到的 索引人奖励总额,以 GRT 为单位 -- 费用削减 - 当您与委托人拆分时,您将保留的查询费用回扣百分比 -- 奖励削减 - 与委托人拆分时您将保留的索引人奖励的百分比 -- 已拥有 - 您存入的股份,可能会因恶意或不正确的行为而被削减 +- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed +- Total Query Fees - the total fees that users have paid for queries served by you over time +- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT +- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators +- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators +- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior ![Explorer Image 12](/img/Indexer-Stats.png) -### 委托标签 +### Delegating Tab -委托人对 The Graph 网络很重要。 委托人必须利用他们的知识来选择能够提供健康回报的索引人。 在这里,您可以找到您的活动和历史委托的详细信息,以及您委托给的索引人的指标。 +Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. -在页面的前半部分,您可以看到您的委托图表,以及仅奖励图表。 在左侧,您可以看到反映您当前委托指标的 KPI。 +In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. -您将在此选项卡中看到的委托人指标包括: +The Delegator metrics you’ll see here in this tab include: -- 总委托奖励 -- 未实现的总奖励 -- 已实现的总奖励 +- Total delegation rewards +- Total unrealized rewards +- Total realized rewards -在页面的后半部分,您将看到委托标签。 在这里,您可以看到您委托给的索引人,以及它们的详细信息(例如奖励削减、冷却时间等)。 +In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). -通过表格右侧的按钮,你可以管理你的委托--更多的委托,取消委托,或在解冻期后撤回你的委托。 +With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -使用表格右侧的按钮,您可以管理您的委托——在解冻期后增加委托、取消委托或撤回委托。 +Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). ![Explorer Image 13](/img/Delegation-Stats.png) -### 策展标签 +### Curating Tab -在 策展选项卡中,您将找到您正在发送信号的所有子图(从而使您能够接收查询费用)。 信号允许策展人向索引人突出显示哪些子图有价值和值得信赖,从而表明它们需要被索引。 +In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). 
Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. -在此选项卡中,您将找到以下内容的概述: +Within this tab, you’ll find an overview of: -- 您正在策展的所有带有信号细节的子图 -- 每个子图的共享总数 -- 查询每个子图的奖励 -- 更新日期详情 +- All the subgraphs you're curating on with signal details +- Share totals per subgraph +- Query rewards per subgraph +- Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) -## 设置您的个人资料 +## Your Profile Settings -在您的用户配置文件中,您将能够管理您的个人配置文件详细信息(例如设置 ENS 名称)。 如果您是 索引人,则可以轻松访问更多设置。 在您的用户配置文件中,您将能够设置您的委托参数和操作员。 +Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. -- 操作员代表索引人在协议中采取有限的操作,例如打开和关闭分配。 操作员通常是其他以太坊地址,与他们的抵押钱包分开,可以访问 索引人可以亲自设置的网络 -- 委托参数允许您控制 GRT 在您和您的委托人之间的分配。 +- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set +- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. ![Explorer Image 15](/img/Profile-Settings.png) -作为您进入去中心化数据世界的官方门户,无论您在网络中的角色如何,G​​raph 浏览器都允许您采取各种行动。 您可以通过打开地址旁边的下拉菜单进入您的个人资料设置,然后单击“设置”按钮。 +As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. -
Wallet details
+
![Wallet details](/img/Wallet-Details.png)
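The Explorer statistics covered in the patch above (total network stake, per-epoch query fees and indexing rewards, Indexer and Delegator metrics) are served by The Graph's own network subgraph, so they can also be fetched programmatically rather than only read in the UI. The sketch below is illustrative only: the entity and field names (`graphNetworks`, `epoches`, `queryFeesCollected`, `totalRewards`) are assumptions and should be verified against the network subgraph's schema in its playground before use.

```graphql
# Illustrative sketch only: entity and field names are assumptions and
# should be checked against the network subgraph's schema before use.
{
  graphNetworks(first: 1) {
    totalTokensStaked    # current total network stake
    totalDelegatedTokens # stake delegated to Indexers
    totalSupply          # GRT supply tracked by the protocol
  }
  epoches(first: 5, orderBy: startBlock, orderDirection: desc) {
    id
    startBlock
    endBlock
    queryFeesCollected
    totalRewards
  }
}
```

Such a query would be run against the network subgraph's query endpoint, for example through the playground linked from Graph Explorer.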
From ca70c53f90d736369c11a0f115ca1af00d88a472 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 19:57:00 -0500 Subject: [PATCH 120/241] New translations index.json (Vietnamese) --- pages/vi/index.json | 78 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/pages/vi/index.json b/pages/vi/index.json index 0967ef424bce..d8244f44217f 100644 --- a/pages/vi/index.json +++ b/pages/vi/index.json @@ -1 +1,77 @@ -{} +{ + "title": "Get Started", + "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", + "shortcuts": { + "aboutTheGraph": { + "title": "About The Graph", + "description": "Tìm hiểu thêm về The Graph" + }, + "quickStart": { + "title": "Quick Start", + "description": "Jump in and start with The Graph" + }, + "developerFaqs": { + "title": "Developer FAQs", + "description": "Frequently asked questions" + }, + "queryFromAnApplication": { + "title": "Query from an Application", + "description": "Learn to query from an application" + }, + "createASubgraph": { + "title": "Create a Subgraph", + "description": "Use Studio to create subgraphs" + }, + "migrateFromHostedService": { + "title": "Migrate from Hosted Service", + "description": "Migrating subgraphs to The Graph Network" + } + }, + "networkRoles": { + "title": "Network Roles", + "description": "Learn about The Graph’s network roles.", + "roles": { + "developer": { + "title": "Nhà phát triển", + "description": "Create a subgraph or use existing subgraphs in a dapp" + }, + "indexer": { + "title": "Indexer", + "description": "Vận hành một nút để lập chỉ mục dữ liệu và phục vụ các truy vấn" + }, + "curator": { + "title": "Curator", + "description": "Tổ chức dữ liệu bằng cách báo hiệu trên các subgraph" + }, + "delegator": { + "title": "Delegator", + "description": "Bảo mật mạng bằng cách ủy quyền GRT cho Indexers" + } + } + }, + "readMore": "Read more", + "products": { + "title": "Các sản phẩm", + "products": { + "subgraphStudio": { + "title": "Subgraph Studio", + "description": "Create, manage and publish subgraphs and API keys" + }, + "graphExplorer": { + "title": "Trình khám phá Graph", + "description": "Explore subgraphs and interact with the protocol" + }, + "hostedService": { + "title": "Hosted Service", + "description": "Create and explore subgraphs on the Hosted Service" + } + } + }, + "supportedNetworks": { + "title": "Mạng lưới được hỗ trợ", + "description": "The Graph supports the following networks on The Graph Network and the Hosted Service.", + "graphNetworkAndHostedService": "The Graph Network & Hosted Service", + "hostedService": "Hosted Service", + "betaWarning": "Network is in beta. Use with caution." + } +} From 38f618b88c43fa324b5b1f4a8cf73a43eb0c135a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:19 -0500 Subject: [PATCH 121/241] New translations introduction.mdx (Spanish) --- pages/es/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/es/about/introduction.mdx b/pages/es/about/introduction.mdx index 5f840c040400..70290d8c3649 100644 --- a/pages/es/about/introduction.mdx +++ b/pages/es/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: Introduction +title: Introducción --- -This page will explain what The Graph is and how you can get started. +En esta página se explica qué es The Graph y cómo puedes empezar a utilizarlo. 
-## What The Graph Is +## ¿Qué es The Graph? -The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. +The Graph es un protocolo descentralizado que permite indexar y consultar los datos de diferentes blockchains, el cual empezó por Ethereum. Permite consultar datos los cuales pueden ser difíciles de consultar directamente. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +Los proyectos con contratos inteligentes complejos como [Uniswap](https://uniswap.org/) y las iniciativas de NFTs como [Bored Ape Yacht Club](https://boredapeyachtclub.com/) almacenan los datos en la blockchain de Ethereum, lo que hace realmente difícil leer algo más que los datos básicos directamente desde la blockchain. -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +En el caso de Bored Ape Yacht Club, podemos realizar operaciones de lecturas básicas en [su contrato](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code), para obtener el propietario de un determinado Ape, obtener el URI de un Ape en base a su ID, o el supply total, ya que estas operaciones de lectura están programadas directamente en el contrato inteligente, pero no son posibles las consultas y operaciones más avanzadas del mundo real como la adición, consultas, las relaciones y el filtrado no trivial. Por ejemplo, si quisiéramos consultar los Apes que son propiedad de una dirección en concreto, y filtrar por una de sus características, no podríamos obtener esa información interactuando directamente con el contrato. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. +Para obtener estos datos, tendríamos que procesar cada uno de los eventos de [`transferencia`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) que se hayan emitido, leer los metadatos de IPFS utilizando el ID del token y el hash del IPFS, con el fin de luego agregarlos. Incluso para este tipo de preguntas relativamente sencillas, una aplicación descentralizada (dapp) que se ejecutara en un navegador tardaría **horas o incluso días** en obtener una respuesta. 
-You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +También podrías construir tu propio servidor, procesar las transacciones allí, guardarlas en una base de datos y construir un endpoint de la API sobre todo ello para consultar los datos. Sin embargo, esta opción requiere recursos intensivos, necesita mantenimiento, y si llegase a presentar algún tipo de fallo podría incluso vulnerar algunos protocolos de seguridad que son necesarios para la descentralización. -**Indexing blockchain data is really, really hard.** +**Indexar los datos de la blockchain es muy, muy difícil.** -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. +Las propiedades de la blockchain, su finalidad, la reorganización de la cadena o los bloques que están por cerrarse, complican aún más este proceso y hacen que no solo se consuma tiempo, sino que sea conceptualmente difícil recuperar los resultados correctos proporcionados por la blockchain. -The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +The Graph resuelve esto con un protocolo descentralizado que indexa y permite una consulta eficiente y de alto rendimiento para recibir los datos de la blockchain. Estas APIs ("subgrafos" indexados) pueden consultarse después con una API de GraphQL estándar. Actualmente, existe un servicio alojado (hosted) y un protocolo descentralizado con las mismas capacidades. Ambos están respaldados por la implementación de código abierto de [Graph Node](https://github.com/graphprotocol/graph-node). -## How The Graph Works +## ¿Cómo funciona The Graph? -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +The Graph aprende, qué y cómo indexar los datos de Ethereum, basándose en las descripciones de los subgrafos, conocidas como el manifiesto de los subgrafos. La descripción del subgrafo define los contratos inteligentes de interés para este subgrafo, los eventos en esos contratos a los que prestar atención, y cómo mapear los datos de los eventos a los datos que The Graph almacenará en su base de datos. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +Una vez que has escrito el `subgraph manifest`, utilizas el CLI de The Graph para almacenar la definición en IPFS y decirle al indexador que empiece a indexar los datos de ese subgrafo. 
-This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +Este diagrama ofrece más detalles sobre el flujo de datos una vez que se ha desplegado en el manifiesto para un subgrafo, que trata de las transacciones en Ethereum: ![](/img/graph-dataflow.png) -The flow follows these steps: +El flujo sigue estos pasos: -1. A decentralized application adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. Una aplicación descentralizada añade datos a Ethereum a través de una transacción en un contrato inteligente. +2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. +3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de su subgrafo que puedan contener. +4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. +5. La aplicación descentralizada consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La aplicación descentralizada muestra estos datos en una interfaz muy completa para el usuario, a fin de que los cliente que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. Y así... el ciclo se repite continuamente. -## Next Steps +## Próximos puntos -In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +En las siguientes secciones entraremos en más detalles sobre cómo definir subgrafos, cómo desplegarlos y cómo consultar los datos de los índices que construye el Graph Node. -Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Antes de que empieces a escribir tu propio subgrafo, es posible que debas echar un vistazo a The Graph Explorer para explorar algunos de los subgrafos que ya han sido desplegados. 
La página de cada subgrafo contiene un playground que te permite consultar los datos de ese subgrafo usando GraphQL. From bdd4d1fbe0f1246aec77ebfe6f3f146910bd60f3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:20 -0500 Subject: [PATCH 122/241] New translations deprecating-a-subgraph.mdx (Arabic) --- pages/ar/developer/deprecating-a-subgraph.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/deprecating-a-subgraph.mdx b/pages/ar/developer/deprecating-a-subgraph.mdx index f8966e025c13..2d83064709da 100644 --- a/pages/ar/developer/deprecating-a-subgraph.mdx +++ b/pages/ar/developer/deprecating-a-subgraph.mdx @@ -1,17 +1,17 @@ --- -title: Deprecating a Subgraph +title: إهمال Subgraph --- -So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: +إن كنت ترغب في إهمال الـ subgraph الخاص بك في The Graph Explorer. فأنت في المكان المناسب! اتبع الخطوات أدناه: -1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) -2. Call 'deprecateSubgraph' with your own address as the first parameter -3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. -4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` -5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: +1. قم بزيارة عنوان العقد [ هنا ](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) +2. استدعِ "devecateSubgraph" بعنوانك الخاص كأول بارامتر +3. في حقل "subgraphNumber" ، قم بإدراج 0 إذا كان أول subgraph تنشره ، 1 إذا كان الثاني ، 2 إذا كان الثالث ، إلخ. +4. يمكن العثور على مدخلات # 2 و # 3 في `` الخاص بك والذي يتكون من `{graphAccount}-{subgraphNumber}`. فمثلا، [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID هو `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`,وهو مزيج من `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` و `subgraphNumber` = `<0>` +5. هاهو! لن يظهر الـ subgraph بعد الآن في عمليات البحث في The Graph Explorer. يرجى ملاحظة ما يلي: -- Curators will not be able to signal on the subgraph anymore -- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price -- Deprecated subgraphs will be indicated with an error message. +- لن يتمكن المنسقون من الإشارة على الـ subgraph بعد الآن +- سيتمكن المنشقون الذين قد أشاروا شابقا على الـ subgraph من سحب إشاراتهم بمتوسط سعر السهم +- ستتم تحديد الـ subgraphs المهملة برسالة خطأ. -If you interacted with the now deprecated subgraph, you'll be able to find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab respectively. 
+إذا تفاعلت مع الـ subgraph المهمل ، فستتمكن من العثور عليه في ملف تعريف المستخدم الخاص بك ضمن علامة التبويب "Subgraphs" أو "Indexing" أو "Curating" على التوالي. From 70f9eac74ddc5d564897af55f84f8057da749f49 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:21 -0500 Subject: [PATCH 123/241] New translations define-subgraph-hosted.mdx (Spanish) --- pages/es/developer/define-subgraph-hosted.mdx | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/es/developer/define-subgraph-hosted.mdx b/pages/es/developer/define-subgraph-hosted.mdx index 92bf5bd8cd2f..64011dddac02 100644 --- a/pages/es/developer/define-subgraph-hosted.mdx +++ b/pages/es/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: Define a Subgraph +title: Definir un Subgrafo --- -A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. +Un subgrafo define los datos que The Graph indexará de Ethereum, y cómo los almacenará. Una vez desplegado, formará parte de un gráfico global de datos de la blockchain. -![Define a Subgraph](/img/define-subgraph.png) +![Definir un Subgrafo](/img/define-subgraph.png) -The subgraph definition consists of a few files: +La definición del subgrafo consta de unos cuantos archivos: -- `subgraph.yaml`: a YAML file containing the subgraph manifest +- `subgraph.yaml`: un archivo YAML que contiene el manifiesto del subgrafo -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +- `schema.graphql`: un esquema GraphQL que define qué datos se almacenan para su subgrafo, y cómo consultarlos a través de GraphQL -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) codigo que traduce de los datos del evento a las entidades definidas en su esquema (por ejemplo `mapping.ts` en este tutorial) -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. +Antes de entrar en detalles sobre el contenido del archivo de manifiesto, es necesario instalar el [Graph CLI](https://github.com/graphprotocol/graph-cli) que necesitarás para construir y desplegar un subgrafo. -## Install the Graph CLI +## Instalar The Graph CLI -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +The Graph CLI está escrito en JavaScript, y tendrás que instalar `yarn` o `npm` para utilizarlo; se supone que tienes yarn en lo que sigue. 
-Once you have `yarn`, install the Graph CLI by running +Una vez que tengas `yarn`, instala The Graph CLI ejecutando -**Install with yarn:** +**Instalar con yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**Instalar con npm:** ```bash npm install -g @graphprotocol/graph-cli From 6189f2afe689b646d78d964876f529f4445ba84d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:22 -0500 Subject: [PATCH 124/241] New translations define-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/define-subgraph-hosted.mdx | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/pages/ar/developer/define-subgraph-hosted.mdx b/pages/ar/developer/define-subgraph-hosted.mdx index 92bf5bd8cd2f..5b6e87beb774 100644 --- a/pages/ar/developer/define-subgraph-hosted.mdx +++ b/pages/ar/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: Define a Subgraph +title: تعريف Subgraph --- -A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. +يحدد ال Subgraph البيانات التي سيقوم TheGraph بفهرستها من الايثيريوم ، وكيف سيتم تخزينها. بمجرد نشرها ، ستشكل جزءا من رسم graph عالمي لبيانات blockchain. -![Define a Subgraph](/img/define-subgraph.png) +![تعريف Subgraph](/img/define-subgraph.png) -The subgraph definition consists of a few files: +يتكون تعريف Subgraph من عدة ملفات: -- `subgraph.yaml`: a YAML file containing the subgraph manifest +- `Subgraph.yaml`ملف YAML يحتوي على Subgraph manifest -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +- `schema.graphql`: مخطط GraphQL يحدد البيانات المخزنة في Subgraph وكيفية الاستعلام عنها عبر GraphQL - `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. +قبل الخوض في التفاصيل حول محتويات ملف manifest ، تحتاج إلى تثبيت [Graph CLI](https://github.com/graphprotocol/graph-cli) والذي سوف تحتاجه لبناء ونشر Subgraph. -## Install the Graph CLI +## قم بتثبيت Graph CLI -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +تمت كتابة Graph CLI بلغة JavaScript ، وستحتاج إلى تثبيتها أيضًا `yarn` or `npm` لتستخدمها؛ من المفترض أن يكون لديك yarn فيما يلي. 
-Once you have `yarn`, install the Graph CLI by running +بمجرد حصولك على `yarn` ، قم بتثبيت Graph CLI عن طريق التشغيل -**Install with yarn:** +**التثبيت بواسطة yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**التثبيت بواسطة npm:** ```bash npm install -g @graphprotocol/graph-cli From 28a1201b4dbb8d8bcefa345b8f1249385b2250b3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:24 -0500 Subject: [PATCH 125/241] New translations define-subgraph-hosted.mdx (Chinese Simplified) --- pages/zh/developer/define-subgraph-hosted.mdx | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/zh/developer/define-subgraph-hosted.mdx b/pages/zh/developer/define-subgraph-hosted.mdx index 92bf5bd8cd2f..17484f0deb7a 100644 --- a/pages/zh/developer/define-subgraph-hosted.mdx +++ b/pages/zh/developer/define-subgraph-hosted.mdx @@ -1,34 +1,34 @@ --- -title: Define a Subgraph +title: 定义子图 --- -A subgraph defines which data The Graph will index from Ethereum, and how it will store it. Once deployed, it will form a part of a global graph of blockchain data. +子图定义了Graph从以太坊索引哪些数据,以及如何存储这些数据。 子图一旦部署,就成为区块链数据全局图的一部分。 -![Define a Subgraph](/img/define-subgraph.png) +![定义子图](/img/define-subgraph.png) -The subgraph definition consists of a few files: +子图定义由几个文件组成: -- `subgraph.yaml`: a YAML file containing the subgraph manifest +- `subgraph.yaml`: 包含子图清单的 YAML 文件 -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +- `schema.graphql`: 一个 GraphQL 模式文件,它定义了为您的子图存储哪些数据,以及如何通过 GraphQL 查询这些数据 -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +- `AssemblyScript映射`: 将事件数据转换为模式中定义的实体(例如本教程中的`mapping.ts`)的 [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) 代码 -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) which you will need to build and deploy a subgraph. +在详细了解清单文件的内容之前,您需要安装[Graph CLI](https://github.com/graphprotocol/graph-cli),以构建和部署子图。 -## Install the Graph CLI +## 安装Graph CLI -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. 
+Graph CLI是使用 JavaScript 编写的,您需要安装`yarn`或 `npm`才能使用它;以下教程中假设您已经安装了yarn。 -Once you have `yarn`, install the Graph CLI by running +一旦您安装了`yarn`,可以通过运行以下命令安装 Graph CLI -**Install with yarn:** +**使用yarn安装:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**使用npm安装:** ```bash npm install -g @graphprotocol/graph-cli From ccf4766e1076e58218a17f99645a48eb8a0bf3b6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:26 -0500 Subject: [PATCH 126/241] New translations deprecating-a-subgraph.mdx (Spanish) --- pages/es/developer/deprecating-a-subgraph.mdx | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/es/developer/deprecating-a-subgraph.mdx b/pages/es/developer/deprecating-a-subgraph.mdx index f8966e025c13..28448746b0a5 100644 --- a/pages/es/developer/deprecating-a-subgraph.mdx +++ b/pages/es/developer/deprecating-a-subgraph.mdx @@ -1,17 +1,17 @@ --- -title: Deprecating a Subgraph +title: Deprecar un Subgrafo --- -So you'd like to deprecate your subgraph on The Graph Explorer. You've come to the right place! Follow the steps below: +Así que te gustaría deprecar tu subgrafo en The Graph Explorer. Has venido al lugar adecuado! Sigue los siguientes pasos: -1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) -2. Call 'deprecateSubgraph' with your own address as the first parameter -3. In the 'subgraphNumber' field, list 0 if it's the first subgraph you're publishing, 1 if it's your second, 2 if it's your third, etc. -4. Inputs for #2 and #3 can be found in your `` which is composed of the `{graphAccount}-{subgraphNumber}`. For example, the [Sushi Subgraph's](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, which is a combination of `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` and `subgraphNumber` = `<0>` -5. Voila! Your subgraph will no longer show up on searches on The Graph Explorer. Please note the following: +1. Visita el address del contrato [aquí](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) +2. Llama a 'deprecateSubgraph' con tu propia dirección como primer parámetro +3. En el campo 'subgraphNumber', anota 0 si es el primer subgrafo que publicas, 1 si es el segundo, 2 si es el tercero, etc. +4. Las entradas para #2 y #3 se pueden encontrar en tu `` que está compuesto por `{graphAccount}-{subgraphNumber}`. Por ejemplo, el [Subgrafo de Sushi](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&version=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0-0&view=Overview) ID is `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0>`, que es una combinación de `graphAccount` = `<0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0>` y `subgraphNumber` = `<0>` +5. Voila! Tu subgrafo ya no aparecerá en las búsquedas en The Graph Explorer. Ten en cuenta lo siguiente: -- Curators will not be able to signal on the subgraph anymore -- Curators that already signaled on the subgraph will be able to withdraw their signal at an average share price -- Deprecated subgraphs will be indicated with an error message. 
+- Los curadores ya no podrán señalar en el subgrafo +- Los curadores que ya hayan señalado en el subgrafo podrán retirar su señal a un precio promedio de la participación +- Los subgrafos deprecados se indicarán con un mensaje de error. -If you interacted with the now deprecated subgraph, you'll be able to find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab respectively. +Si interactuaste con el ahora subgrafo deprecado, podrás encontrarlo en tu perfil de usuario en la pestaña "Subgraphs", "Indexing" o "Curating" respectivamente. From e38f84190a739254ce7c4705d1c3cb2f2a7a5b0b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:29 -0500 Subject: [PATCH 127/241] New translations developer-faq.mdx (Spanish) --- pages/es/developer/developer-faq.mdx | 120 +++++++++++++-------------- 1 file changed, 60 insertions(+), 60 deletions(-) diff --git a/pages/es/developer/developer-faq.mdx b/pages/es/developer/developer-faq.mdx index 41449c60e5ab..ed6de912d75e 100644 --- a/pages/es/developer/developer-faq.mdx +++ b/pages/es/developer/developer-faq.mdx @@ -1,70 +1,70 @@ --- -title: Developer FAQs +title: Preguntas Frecuentes de los Desarrolladores --- -### 1. Can I delete my subgraph? +### 1. ¿Puedo eliminar mi subgrafo? -It is not possible to delete subgraphs once they are created. +No es posible eliminar los subgrafos una vez creados. -### 2. Can I change my subgraph name? +### 2. ¿Puedo cambiar el nombre de mi subgrafo? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +No. Una vez creado un subgrafo, no se puede cambiar el nombre. Asegúrate de pensar en esto cuidadosamente antes de crear tu subgrafo para que sea fácilmente buscable e identificable por otras dapps. -### 3. Can I change the GitHub account associated with my subgraph? +### 3. ¿Puedo cambiar la cuenta de GitHub asociada a mi subgrafo? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +No. Una vez creado un subgrafo, la cuenta de GitHub asociada no puede ser modificada. Asegúrate de pensarlo bien antes de crear tu subgrafo. -### 4. Am I still able to create a subgraph if my smart contracts don't have events? +### 4. ¿Puedo crear un subgrafo si mis contratos inteligentes no tienen eventos? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. +Es muy recomendable que estructures tus contratos inteligentes para tener eventos asociados a los datos que te interesa consultar. Los handlers de eventos en el subgrafo son activados por los eventos de los contratos, y son, con mucho, la forma más rápida de recuperar datos útiles. -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. +Si los contratos con los que trabajas no contienen eventos, tu subgrafo puede utilizar handlers de llamadas y bloques para activar la indexación. Aunque esto no se recomienda, ya que el rendimiento será significativamente más lento. -### 5. 
Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. ¿Es posible desplegar un subgrafo con el mismo nombre para varias redes? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +Necesitarás nombres distintos para varias redes. Aunque no se pueden tener diferentes subgrafos bajo el mismo nombre, hay formas convenientes de tener una sola base de código para múltiples redes. Encontrará más información al respecto en nuestra documentación: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. How are templates different from data sources? +### 6. ¿En qué se diferencian las plantillas de las fuentes de datos? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +Las plantillas permiten crear fuentes de datos sobre la marcha, mientras el subgrafo se indexa. Puede darse el caso de que tu contrato genere nuevos contratos a medida que la gente interactúe con él, y dado que conoces la forma de esos contratos (ABI, eventos, etc) por adelantado, puedes definir cómo quieres indexarlos en una plantilla y, cuando se generen, tu subgrafo creará una fuente de datos dinámica proporcionando la dirección del contrato. -Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). +Consulta la sección "Instalar un modelo de fuente de datos" en: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). -### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 7. ¿Cómo puedo asegurarme de que estoy utilizando la última versión de graph-node para mis despliegues locales? -You can run the following command: +Puede ejecutar el siguiente comando: ```sh docker pull graphprotocol/graph-node:latest ``` -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +**NOTA:** docker / docker-compose siempre utilizará la versión de graph-node que se sacó la primera vez que se ejecutó, por lo que es importante hacer esto para asegurarse de que estás al día con la última versión de graph-node. -### 8. How do I call a contract function or access a public state variable from my subgraph mappings? +### 8. ¿Cómo puedo llamar a una función de contrato o acceder a una variable de estado pública desde mis mapeos de subgrafos? -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). +Echa un vistazo al estado `Access to smart contract` dentro de la sección [AssemblyScript API](/developer/assemblyscript-api). -### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. ¿Es posible configurar un subgrafo usando `graph init` desde `graph-cli` con dos contratos? ¿O debo añadir manualmente otra fuente de datos en `subgraph.yaml` después de ejecutar `graph init`? -Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. +Lamentablemente, esto no es posible en la actualidad. `graph init` está pensado como un punto de partida básico, a partir del cual puedes añadir más fuentes de datos manualmente. -### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? +### 10. Quiero contribuir o agregar una cuestión en GitHub, ¿dónde puedo encontrar los repositorios de código abierto? - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 11. ¿Cuál es la forma recomendada de construir ids "autogenerados" para una entidad cuando se manejan eventos? -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +Si sólo se crea una entidad durante el evento y si no hay nada mejor disponible, entonces el hash de la transacción + el índice del registro serían únicos. Puedes ofuscar esto convirtiendo eso en Bytes y luego pasándolo por `crypto.keccak256` pero esto no lo hará más único. -### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 12. Cuando se escuchan varios contratos, ¿es posible seleccionar el orden de los contratos para escuchar los eventos? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. -### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? +### 13. ¿Es posible diferenciar entre redes (mainnet, Kovan, Ropsten, local) desde los handlers de eventos? -Yes. You can do this by importing `graph-ts` as per the example below: +Sí. Puedes hacerlo importando `graph-ts` como en el ejemplo siguiente: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,39 +73,39 @@ dataSource.network() dataSource.address() ``` -### 14. Do you support block and call handlers on Rinkeby? +### 14. ¿Apoyan el bloqueo y los handlers de llamadas en Rinkeby? -On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. +En Rinkeby apoyamos los handlers de bloque, pero sin `filter: call`. Los handlers de llamadas no son compatibles por el momento. -### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? +### 15. ¿Puedo importar ethers.js u otras bibliotecas JS en mis mapeos de subgrafos? -Not currently, as mappings are written in AssemblyScript. 
One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Actualmente no, ya que los mapeos se escriben en AssemblyScript. Una posible solución alternativa a esto es almacenar los datos en bruto en entidades y realizar la lógica que requiere las bibliotecas JS en el cliente. -### 16. Is it possible to specifying what block to start indexing on? +### 16. ¿Es posible especificar en qué bloque se inicia la indexación? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks +Sí. `dataSources.source.startBlock` en el `subgraph.yaml` especifica el número del bloque a partir del cual la fuente de datos comienza a indexar. En la mayoría de los casos, sugerimos utilizar el bloque en el que se creó el contrato: Bloques de inicio -### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. +### 17. ¿Hay algunos consejos para aumentar el rendimiento de la indexación? Mi subgrafo está tardando mucho en sincronizarse. -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) +Sí, deberías echar un vistazo a la función opcional de inicio de bloque para comenzar la indexación desde el bloque en el que se desplegó el contrato: [Start blocks](/developer/create-subgraph-hosted#start-blocks) -### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? +### 18. ¿Hay alguna forma de consultar directamente el subgrafo para determinar cuál es el último número de bloque que ha indexado? -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +¡Sí! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. What networks are supported by The Graph? +### 19. ¿Qué redes son compatibles con The Graph? -The graph-node supports any EVM-compatible JSON RPC API chain. +The Graph Node admite cualquier cadena de API JSON RPC compatible con EVM. -The Graph Network supports subgraphs indexing mainnet Ethereum: +The Graph Network admite subgrafos que indexan la red principal de Ethereum: - `mainnet` -In the Hosted Service, the following networks are supported: +En el Servicio Alojado, se admiten las siguientes redes: - Ethereum mainnet - Kovan @@ -133,40 +133,40 @@ In the Hosted Service, the following networks are supported: - Optimism - Optimism Testnet (on Kovan) -There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). +Se está trabajando en la integración de otras blockchains, puedes leer más en nuestro repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). -### 20. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 20. 
¿Es posible duplicar un subgrupo en otra cuenta o endpoint sin volver a desplegarlo? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +Tienes que volver a desplegar el subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. -### 21. Is this possible to use Apollo Federation on top of graph-node? +### 21. ¿Es posible utilizar Apollo Federation sobre graph-node? -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. +Federation aún no es compatible, aunque queremos apoyarla en el futuro. Por el momento, algo que se puede hacer es utilizar el stitching de esquemas, ya sea en el cliente o a través de un servicio proxy. -### 22. Is there a limit to how many objects The Graph can return per query? +### 22. ¿Existe un límite en el número de objetos que The Graph puede devolver por consulta? -By default query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that you can paginate with: +Por defecto, las respuestas a las consultas están limitadas a 100 elementos por colección. Si quieres recibir más, puedes llegar hasta 1000 artículos por colección y más allá puedes paginar con: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. Si mi dapp frontend utiliza The Graph para la consulta, ¿tengo que escribir mi clave de consulta en el frontend directamente? Si pagamos tasas de consulta a los usuarios, ¿los usuarios malintencionados harán que nuestras tasas de consulta sean muy altas? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a host name, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Actualmente, el enfoque recomendado para una dapp es añadir la clave al frontend y exponerla a los usuarios finales. Dicho esto, puedes limitar esa clave a un nombre de host, como _yourdapp.io_ y subgrafo. El gateway está siendo gestionado actualmente por Edge & Node. Parte de la responsabilidad de un gateway es vigilar los comportamientos abusivos y bloquear el tráfico de los clientes maliciosos. -### 24. Where do I go to find my current subgraph on the Hosted Service? +### 24. ¿Dónde puedo encontrar mi subgrafo actual en el Servicio Alojado? -Head over to the Hosted Service in order to find subgraphs that you or others deployed to the Hosted Service. You can find it [here.](https://thegraph.com/hosted-service) +Dirígete al Servicio Alojado para encontrar los subgrafos que tú u otros desplegaron en el Servicio Alojado. Puedes encontrarlo [aquí.](https://thegraph.com/hosted-service) -### 25. Will the Hosted Service start charging query fees? +### 25. ¿Comenzará el Servicio Alojado a cobrar tasas de consulta? -The Graph will never charge for the Hosted Service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. 
The Hosted Service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to migrate to the decentralized network as they are comfortable. +The Graph nunca cobrará por el Servicio Alojado. The Graph es un protocolo descentralizado, y cobrar por un servicio centralizado no está alineado con los valores de The Graph. El Servicio Alojado siempre fue un paso temporal para ayudar a llegar a la red descentralizada. Los desarrolladores dispondrán de tiempo suficiente para migrar a la red descentralizada a medida que se sientan cómodos. -### 26. When will the Hosted Service be shut down? +### 26. ¿Cuándo se cerrará el Servicio Alojado? -If and when there are plans to do this, the community will be notified well ahead of time with considerations made for any subgraphs built on the Hosted Service. +Si y cuando se planee hacer esto, se notificará a la comunidad con suficiente antelación y se tendrán en cuenta los subgrafos construidos en el Servicio Alojado. -### 27. How do I upgrade a subgraph on mainnet? +### 27. ¿Cómo puedo actualizar un subgrafo en mainnet? -If you’re a subgraph developer, you can upgrade a new version of your subgraph to the Studio using the CLI. It’ll be private at that point but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +Si eres un desarrollador de subgrafos, puedes actualizar una nueva versión de tus subgrafos a Studio utilizando la CLI. En ese momento será privado, pero si estás contento con él, puedes publicarlo en the Graph Explorer descentralizado. Esto creará una nueva versión de tu subgrafo que los Curadoress pueden empezar a señalar. From 911af3caa020b45fa56b73b3734ccf8cd90f6452 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:30 -0500 Subject: [PATCH 128/241] New translations developer-faq.mdx (Arabic) --- pages/ar/developer/developer-faq.mdx | 90 ++++++++++++++-------------- 1 file changed, 45 insertions(+), 45 deletions(-) diff --git a/pages/ar/developer/developer-faq.mdx b/pages/ar/developer/developer-faq.mdx index 41449c60e5ab..670d557c121c 100644 --- a/pages/ar/developer/developer-faq.mdx +++ b/pages/ar/developer/developer-faq.mdx @@ -1,70 +1,70 @@ --- -title: Developer FAQs +title: الأسئلة الشائعة للمطورين --- -### 1. Can I delete my subgraph? +### 1. هل يمكنني حذف ال Subgraph الخاص بي؟ -It is not possible to delete subgraphs once they are created. +لا يمكن حذف ال Subgraph بمجرد إنشائها. -### 2. Can I change my subgraph name? +### 2. هل يمكنني تغيير اسم ال Subgraph الخاص بي؟ -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير الاسم. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك حتى يسهل البحث عنه والتعرف عليه من خلال ال Dapps الأخرى. -### 3. Can I change the GitHub account associated with my subgraph? +### 3. هل يمكنني تغيير حساب GitHub المرتبط ب Subgraph الخاص بي؟ -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير حساب GitHub المرتبط. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك. -### 4. Am I still able to create a subgraph if my smart contracts don't have events? +### 4. 
هل يمكنني إنشاء Subgraph إذا لم تكن العقود الذكية الخاصة بي تحتوي على أحداث؟ -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events, and are by far the fastest way to retrieve useful data. +من المستحسن جدا أن تقوم بإنشاء عقودك الذكية بحيث يكون لديك أحداث مرتبطة بالبيانات التي ترغب في الاستعلام عنها. يتم تشغيل معالجات الأحداث في subgraph بواسطة أحداث العقد، وهي إلى حد بعيد أسرع طريقة لاسترداد البيانات المفيدة. -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended as performance will be significantly slower. +إذا كانت العقود التي تعمل معها لا تحتوي على أحداث، فيمكن أن يستخدم ال Subgraph معالجات الاتصال والحظر لتشغيل الفهرسة. وهذا غير موصى به لأن الأداء سيكون أبطأ بشكل ملحوظ. -### 5. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. هل من الممكن نشر Subgraph واحد تحمل نفس الاسم لشبكات متعددة؟ -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) +ستحتاج إلى أسماء مختلفه لشبكات متعددة. ولا يمكن أن يكون لديك Subgraph مختلف تحت نفس الاسم ، إلا أن هناك طرقًا ملائمة لأمتلاك قاعدة بيانات واحدة لشبكات متعددة. اكتشف المزيد حول هذا الأمر في وثائقنا: [ إعادة نشر ال Subgraph ](/hosted-service/deploy-subgraph-hosted#redeploying-a-subgraph) -### 6. How are templates different from data sources? +### 6. كيف تختلف النماذج عن مصادر البيانات؟ -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) up front you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +تسمح لك النماذج بإنشاء مصادر البيانات على الفور ، أثناء فهرسة ال Subgraph الخاص بك. قد يكون الأمر هو أن عقدك سينتج عنه عقود جديدة عندما يتفاعل الأشخاص معه ، وبما أنك تعرف شكل هذه العقود (ABI ، الأحداث ، إلخ) مسبقًا ، يمكنك تحديد الطريقة التي تريد فهرستها بها في النموذج ومتى يتم إنتاجها ، وسيقوم ال Subgraph الخاص بك بإنشاء مصدر بيانات ديناميكي عن طريق توفير عنوان العقد. -Check out the "Instantiating a data source template" section on: [Data Source Templates](/developer/create-subgraph-hosted#data-source-templates). +راجع قسم "إنشاء نموذج مصدر بيانات" في: [ نماذج مصدر البيانات ](/developer/create-subgraph-hosted#data-source-templates). -### 7. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 7. كيف أتأكد من أنني أستخدم أحدث إصدار من graph-node لعمليات النشر المحلية الخاصة بي؟ -You can run the following command: +يمكنك تشغيل الأمر التالي: ```sh docker pull graphprotocol/graph-node:latest ``` -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. 
+** ملاحظة: ** ستستخدم docker / docker-compose دائمًا أي إصدار من graph-node تم سحبه في المرة الأولى التي قمت بتشغيلها ، لذلك من المهم القيام بذلك للتأكد من أنك محدث بأحدث إصدار graph-node. -### 8. How do I call a contract function or access a public state variable from my subgraph mappings? +### 8. كيف يمكنني استدعاء دالة العقد أو الوصول إلى متغير الحالة العامة من Subgraph mappings الخاصة بي؟ -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developer/assemblyscript-api). +ألقِ نظرة على حالة `الوصول إلى العقد الذكي` داخل القسم [ AssemblyScript API ](/developer/assemblyscript-api). -### 9. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 9. هل من الممكن إنشاء Subgraph باستخدام`graph init` from `graph-cli`بعقدين؟ أو هل يجب علي إضافة مصدر بيانات آخر يدويًا في `subgraph.yaml` بعد تشغيل `graph init`؟ -Unfortunately this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually. +للأسف هذا غير ممكن حاليا. الغرض من `graph init` هو أن تكون نقطة بداية أساسية حيث يمكنك من خلالها إضافة المزيد من مصادر البيانات يدويًا. -### 10. I want to contribute or add a GitHub issue, where can I find the open source repositories? +### 10. أرغب في المساهمة أو إضافة مشكلة GitHub ، أين يمكنني العثور على مستودعات مفتوحة المصدر؟ - [graph-node](https://github.com/graphprotocol/graph-node) - [graph-cli](https://github.com/graphprotocol/graph-cli) - [graph-ts](https://github.com/graphprotocol/graph-ts) -### 11. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 11. ما هي الطريقة الموصى بها لإنشاء معرفات "تلقائية" لكيان عند معالجة الأحداث؟ -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +إذا تم إنشاء كيان واحد فقط أثناء الحدث ولم يكن هناك أي شيء متاح بشكل أفضل ، فسيكون hash الإجراء + فهرس السجل فريدا. يمكنك تشويشها عن طريق تحويلها إلى Bytes ثم تمريرها عبر `crypto.keccak256` ولكن هذا لن يجعلها فريدة من نوعها. -### 12. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 12. عند الاستماع إلى عدة عقود ، هل من الممكن تحديد أمر العقد للاستماع إلى الأحداث؟ -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. -### 13. Is it possible to differentiate between networks (mainnet, Kovan, Ropsten, local) from within event handlers? +### 13. هل من الممكن التفريق بين الشبكات (mainnet، Kovan، Ropsten، local) من داخل معالجات الأحداث؟ -Yes. You can do this by importing `graph-ts` as per the example below: +نعم. يمكنك القيام بذلك عن طريق استيراد `graph-ts` كما في المثال أدناه: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -73,39 +73,39 @@ dataSource.network() dataSource.address() ``` -### 14. Do you support block and call handlers on Rinkeby? +### 14. هل تدعم معالجات الكتل والإستدعاء على Rinkeby؟ -On Rinkeby we support block handlers, but without `filter: call`. Call handlers are not supported for the time being. 
+في Rinkeby ، ندعم معالجات الكتل ، لكن بدون `filter: call`. معالجات الاستدعاء غير مدعومة في الوقت الحالي. -### 15. Can I import ethers.js or other JS libraries into my subgraph mappings? +### 15. هل يمكنني استيراد ethers.js أو مكتبات JS الأخرى إلى ال Subgraph mappings الخاصة بي؟ -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +ليس حاليًا ، حيث تتم كتابة ال mappings في AssemblyScript. أحد الحلول البديلة الممكنة لذلك هو تخزين البيانات الأولية في الكيانات وتنفيذ المنطق الذي يتطلب مكتبات JS على ال client. -### 16. Is it possible to specifying what block to start indexing on? +### 16. هل من الممكن تحديد الكتلة التي سيتم بدء الفهرسة عليها؟ -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created: Start blocks +نعم. يحدد `dataSources.source.startBlock` في ملف `subgraph.yaml` رقم الكتلة الذي يبدأ مصدر البيانات الفهرسة منها. في معظم الحالات نقترح استخدام الكتلة التي تم إنشاء العقد من خلالها: Start blocks -### 17. Are there some tips to increase performance of indexing? My subgraph is taking a very long time to sync. +### 17. هل هناك بعض النصائح لتحسين أداء الفهرسة؟ تستغرق مزامنة ال subgraph وقتًا طويلاً جدًا. -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developer/create-subgraph-hosted#start-blocks) +نعم ، يجب إلقاء نظرة على ميزة start block الاختيارية لبدء الفهرسة من الكتل التي تم نشر العقد: [ start block ](/developer/create-subgraph-hosted#start-blocks) -### 18. Is there a way to query the subgraph directly to determine what the latest block number it has indexed? +### 18. هل هناك طريقة للاستعلام عن ال Subgraph بشكل مباشر مباشرةً رقم الكتلة الأخير الذي تمت فهرسته؟ -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +نعم! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 19. What networks are supported by The Graph? +### 19. ما هي الشبكات الذي يدعمها The Graph؟ -The graph-node supports any EVM-compatible JSON RPC API chain. +تدعم graph-node أي سلسلة API JSON RPC متوافقة مع EVM. -The Graph Network supports subgraphs indexing mainnet Ethereum: +شبكة The Graph تدعم ال subgraph وذلك لفهرسة mainnet Ethereum: - `mainnet` -In the Hosted Service, the following networks are supported: +في ال Hosted Service ، يتم دعم الشبكات التالية: - Ethereum mainnet - Kovan @@ -129,9 +129,9 @@ In the Hosted Service, the following networks are supported: - Fuse - Moonbeam - Arbitrum One -- Arbitrum Testnet (on Rinkeby) +- (Arbitrum Testnet (on Rinkeby - Optimism -- Optimism Testnet (on Kovan) +- (Optimism Testnet (on Kovan There is work in progress towards integrating other blockchains, you can read more in our repo: [RFC-0003: Multi-Blockchain Support](https://github.com/graphprotocol/rfcs/pull/8/files). 
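Whichever of these networks is targeted, the selection happens per data source through the `network` field of `subgraph.yaml`. A minimal sketch, with illustrative values:

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    # Must be one of the supported network identifiers, e.g. `mainnet`
    network: mainnet
    source:
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
      abi: Gravity
```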
From f390bd57db0a7cb653cf9190a9a26793e70cdacb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:33 -0500 Subject: [PATCH 129/241] New translations distributed-systems.mdx (Spanish) --- pages/es/developer/distributed-systems.mdx | 50 +++++++++++----------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/pages/es/developer/distributed-systems.mdx b/pages/es/developer/distributed-systems.mdx index 894fcbe2e18b..bfbc733c4107 100644 --- a/pages/es/developer/distributed-systems.mdx +++ b/pages/es/developer/distributed-systems.mdx @@ -1,37 +1,37 @@ --- -title: Distributed Systems +title: Sistemas Distribuidos --- -The Graph is a protocol implemented as a distributed system. +The Graph es un protocolo implementado como un sistema distribuido. -Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. +Las conexiones fallan. Las solicitudes llegan fuera de orden. Diferentes computadoras con relojes y estados desincronizados procesan solicitudes relacionadas. Los servidores se reinician. Las reorganizaciones se producen entre las solicitudes. Estos problemas son inherentes a todos los sistemas distribuidos, pero se agravan en los sistemas que funcionan a escala mundial. -Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. +Considera este ejemplo de lo que puede ocurrir si un cliente pregunta a un Indexador por los últimos datos durante una reorganización. -1. Indexer ingests block 8 -2. Request served to the client for block 8 -3. Indexer ingests block 9 -4. Indexer ingests block 10A -5. Request served to the client for block 10A -6. Indexer detects reorg to 10B and rolls back 10A -7. Request served to the client for block 9 -8. Indexer ingests block 10B -9. Indexer ingests block 11 -10. Request served to the client for block 11 +1. El indexador ingiere el bloque 8 +2. Solicitud servida al cliente para el bloque 8 +3. El indexador ingiere el bloque 9 +4. El indexador ingiere el bloque 10A +5. Solicitud servida al cliente para el bloque 10A +6. El indexador detecta la reorganización a 10B y retrocede a 10A +7. Solicitud servida al cliente para el bloque 9 +8. El indexador ingiere el bloque 10B +9. El indexador ingiere el bloque 11 +10. Solicitud servida al cliente para el bloque 11 -From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. +Desde el punto de vista del indexador, las cosas avanzan lógicamente. El tiempo avanza, aunque tuvimos que hacer retroceder un uncle bloque y jugar el bloque bajo el consenso hacia adelante en la parte superior. En el camino, el Indexador sirve las peticiones utilizando el último estado que conoce en ese momento. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. 
The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +Sin embargo, desde el punto de vista del cliente, las cosas parecen caóticas. El cliente observa que las respuestas fueron para los bloques 8, 10, 9 y 11 en ese orden. Lo llamamos el problema del "block wobble" (bamboleo del bloque). Cuando un cliente experimenta un bamboleo de bloques, los datos pueden parecer contradecirse a lo largo del tiempo. La situación se agrava si tenemos en cuenta que no todos los indexadores ingieren los últimos bloques de forma simultánea, y tus peticiones pueden ser dirigidas a varios indexadores. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +Es responsabilidad del cliente y del servidor trabajar juntos para proporcionar datos coherentes al usuario. Hay que utilizar diferentes enfoques en función de la coherencia deseada, ya que no existe un programa adecuado para todos los problemas. -Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. +Razonar las implicancias de los sistemas distribuidos es difícil, pero la solución puede no serlo! Hemos establecido APIs y patrones para ayudarte a navegar por algunos casos de uso comunes. Los siguientes ejemplos ilustran estos patrones pero eluden los detalles requeridos por el código de producción (como el manejo de errores y la cancelación) para no ofuscar las ideas principales. -## Polling for updated data +## Sondeo para obtener datos actualizados -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph proporciona la API `block: { number_gte: $minBlock }`, que asegura que la respuesta es para un solo bloque igual o superior a `$minBlock`. Si la petición se realiza a una instancia de `graph-node` y el bloque mínimo no está aún sincronizado, `graph-node` devolverá un error. Si `graph-node` ha sincronizado el bloque mínimo, ejecutará la respuesta para el último bloque. Si la solicitud se hace a un Edge & Node Gateway, el Gateway filtrará los Indexadores que aún no hayan sincronizado el bloque mínimo y hará la solicitud para el último bloque que el Indexador haya sincronizado. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +Podemos utilizar `number_gte` para asegurarnos de que el tiempo nunca viaja hacia atrás cuando se realizan sondeos de datos en un loop. 
Aquí hay un ejemplo: ```javascript /// Updates the protocol.paused variable to the latest @@ -73,11 +73,11 @@ async function updateProtocolPaused() { } ``` -## Fetching a set of related items +## Obtención de un conjunto de elementos relacionados -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +Otro caso de uso es la recuperación de un conjunto grande o, más generalmente, la recuperación de elementos relacionados a través de múltiples solicitudes. A diferencia del caso del sondeo (en el que la coherencia deseada era avanzar en el tiempo), la coherencia deseada es para un único punto en el tiempo. -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +Aquí utilizaremos el argumento `block: { hash: $blockHash }` para anclar todos nuestros resultados al mismo bloque. ```javascript /// Gets a list of domain names from a single block using pagination @@ -129,4 +129,4 @@ async function getDomainNames() { } ``` -Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. +Ten en cuenta que en caso de reorganización, el cliente tendrá que reintentar desde la primera solicitud para actualizar el hash del bloque a un non-uncle bloque. From ba4fe2924caefd5978232c538b904e85cbd6e554 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:34 -0500 Subject: [PATCH 130/241] New translations create-subgraph-hosted.mdx (Arabic) --- pages/ar/developer/create-subgraph-hosted.mdx | 410 +++++++++--------- 1 file changed, 205 insertions(+), 205 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index ccb4432abba2..c93bf5f14604 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -1,10 +1,10 @@ --- -title: Create a Subgraph +title: إنشاء الـ Subgraph --- -Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. +قبل التمكن من استخدام Graph CLI ، يلزمك إنشاء الـ subgraph الخاص بك في [ Subgraph Studio ](https://thegraph.com/studio). ستتمكن بعد ذلك من إعداد مشروع الـ subgraph الخاص بك ونشره على المنصة الي تختارها. لاحظ أنه لن يتم نشر ** الـ subgraphs التي لا تقوم بفهرسة mainnet لإيثريوم على شبكة The Graph **. -The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. +يمكن استخدام الأمر `graph init` لإعداد مشروع subgraph جديد ، إما من عقد موجود على أي من شبكات Ethereum العامة ، أو من مثال subgraph. 
يمكن استخدام هذا الأمر لإنشاء subgraph في Subgraph Studio عن طريق تمرير `graph init --product subgraph-studio`. إذا كان لديك بالفعل عقد ذكي تم نشره على شبكة Ethereum mainnet أو إحدى شبكات testnets ، فإن تمهيد subgraph جديد من هذا العقد يمكن أن يكون طريقة جيدة للبدء. لكن أولا ، لنتحدث قليلا عن الشبكات التي يدعمها The Graph. ## الشبكات المدعومة @@ -12,7 +12,7 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `mainnet` -**Additional Networks are supported in beta on the Hosted Service**: +** يتم دعم الشبكات الإضافية في الإصدار beta على Hosted Service **: - `mainnet` - `kovan` @@ -44,13 +44,13 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `aurora` - `aurora-testnet` -The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. +يعتمد Graph's Hosted Service على استقرار وموثوقية التقنيات الأساسية ، وهي نقاط JSON RPC endpoints. المتوفرة. سيتم تمييز الشبكات الأحدث على أنها في مرحلة beta حتى تثبت الشبكة نفسها من حيث الاستقرار والموثوقية وقابلية التوسع. خلال هذه الفترة beta ، هناك خطر حدوث عطل وسلوك غير متوقع. -Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). +تذكر أنك ** لن تكون قادرا ** على نشر subgraph يفهرس شبكة non-mainnet لـ شبكة Graph اللامركزية في \[Subgraph Studio \](/ studio / subgraph-studio). -## From An Existing Contract +## من عقد موجود -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +الأمر التالي ينشئ subgraph يفهرس كل الأحداث للعقد الموجود. إنه يحاول جلب ABI للعقد من Etherscan ويعود إلى طلب مسار ملف محلي. إذا كانت أي من arguments الاختيارية مفقودة ، فسيأخذك عبر نموذج تفاعلي. ```sh graph init \ @@ -61,23 +61,23 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +`` هو ID لـ subgraph الخاص بك في Subgraph Studio ، ويمكن العثور عليه في صفحة تفاصيل الـ subgraph. -## From An Example Subgraph +## من مثال Subgraph -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +الوضع الثاني `graph init` يدعم إنشاء مشروع جديد من مثال subgraph. الأمر التالي يقوم بهذا: ``` graph init --studio ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +يعتمد مثال الـ subgraph على عقد Gravity بواسطة Dani Grant الذي يدير avatars للمستخدم ويصدر أحداث `NewGravatar` أو `UpdateGravatar` كلما تم إنشاء avatars أو تحديثها. يعالج الـ subgraph هذه الأحداث عن طريق كتابة كيانات `Gravatar` إلى مخزن Graph Node والتأكد من تحديثها وفقا للأحداث. ستنتقل الأقسام التالية إلى الملفات التي تشكل الـ subgraph manifest لهذا المثال. 
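Once the example subgraph is deployed, it can be queried like any other subgraph. A small illustrative GraphQL query against the `Gravatar` entities it stores (the fields follow the schema shown later in this guide):

```graphql
{
  gravatars(first: 5, orderBy: displayName) {
    id
    owner
    displayName
    imageUrl
  }
}
```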
## The Subgraph Manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +Subgraph manifest `subgraph.yaml` تحدد العقود الذكية لفهارس الـ subgraph الخاص بك ، والأحداث من هذه العقود التي يجب الانتباه إليها ، وكيفية عمل map لبيانات الأحداث للكيانات التي تخزنها Graph Node وتسمح بالاستعلام عنها. يمكن العثور على المواصفات الكاملة لـ subgraph manifests [ هنا ](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph, `subgraph.yaml` is: +بالنسبة لمثال الـ subgraph ،يكون الـ `subgraph.yaml`: ```yaml specVersion: 0.0.4 @@ -118,59 +118,59 @@ dataSources: file: ./src/mapping.ts ``` -The important entries to update for the manifest are: +الإدخالات الهامة لتحديث manifest هي: -- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. +- `description`: وصف يمكن قراءته لماهية الـ subgraph. يتم عرض هذا الوصف بواسطة Graph Explorer عند نشر الـ subgraph على الـ Hosted Service. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. +- `repository`: عنوان URL للمخزن حيث يمكن العثور على subgraph manifest. يتم أيضا عرض هذا بواسطة Graph Explorer. -- `features`: a list of all used [feature](#experimental-features) names. +- `features`: قائمة بجميع أسماء الـ [ الميزات](#experimental-features) المستخدمة. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: عنوان العقد الذكي ،و مصادر الـ subgraph ، و abi استخدام العقد الذكي. العنوان اختياري. وبحذفه يسمح بفهرسة الأحداث المطابقة من جميع العقود. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: الرقم الاختياري للكتلة والتي يبدأ مصدر البيانات بالفهرسة منها. في معظم الحالات نقترح استخدام الكتلة التي تم إنشاء العقد من خلالها. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. +- `dataSources.mapping.entities`: الكيانات التي يكتبها مصدر البيانات إلى المخزن. يتم تحديد مخطط كل كيان في ملف schema.graphql. -- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. +- `dataSources.mapping.abis`: ملف ABI واحد أو أكثر لعقد المصدر بالإضافة إلى أي عقود ذكية أخرى تتفاعل معها من داخل الـ mappings. - `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. - `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. 
-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. بدون فلتر، سيتم تشغيل معالج الكتلة في كل كتلة. يمكن توفير فلتر اختياري مع الأنواع التالية: call`. سيعمل فلتر` call` على تشغيل المعالج إذا كانت الكتلة تحتوي على استدعاء واحد على الأقل لعقد مصدر البيانات. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +يمكن لـ subgraph واحد فهرسة البيانات من عقود ذكية متعددة. أضف إدخالا لكل عقد يجب فهرسة البيانات منه إلى مصفوفة `dataSources`. -The triggers for a data source within a block are ordered using the following process: +يتم ترتيب الـ triggers لمصدر البيانات داخل الكتلة باستخدام العملية التالية: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers with in the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. يتم ترتيب triggers الأحداث والاستدعاءات أولا من خلال فهرس الإجراء داخل الكتلة. +2. يتم ترتيب triggers الحدث والاستدعاء في نفس الإجراء باستخدام اصطلاح: يتم تفعيل مشغلات الحدث أولا ثم مشغلات الاستدعاء (event triggers first then call triggers) ، ويحترم كل نوع الترتيب المحدد في الـ manifest. +3. يتم تشغيل مشغلات الكتلة بعد مشغلات الحدث والاستدعاء، بالترتيب المحدد في الـ manifest. -These ordering rules are subject to change. +قواعد الترتيب هذه عرضة للتغيير. ### Getting The ABIs -The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: +يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI: -- If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. +- إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs. +- إذا كنت تقوم ببناء subgraph لمشروع عام ، فيمكنك تنزيل هذا المشروع على جهاز الكمبيوتر الخاص بك والحصول على ABI باستخدام [ `truffle compile` ](https://truffleframework.com/docs/truffle/overview) أو استخدام solc للترجمة. +- يمكنك أيضا العثور على ABI على [ Etherscan ](https://etherscan.io/) ، ولكن هذا ليس موثوقا به دائما ، حيث قد يكون ABI الذي تم تحميله هناك قديما. تأكد من أن لديك ABI الصحيح ، وإلا فإن تشغيل الـ subgraph الخاص بك سيفشل. -## The GraphQL Schema +## مخطط GraphQL -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. +مخطط الـ subgraph الخاص بك موجود في الملف `schema.graphql`. يتم تعريف مخططات GraphQL باستخدام لغة تعريف واجهة GraphQL. إذا لم تكتب مخطط GraphQL مطلقا ، فمن المستحسن أن تقوم بمراجعة هذا التمهيد على نظام نوع GraphQL. يمكن العثور على الوثائق المرجعية لمخططات GraphQL في قسم [ GraphQL API ](/developer/graphql-api). -## Defining Entities +## تعريف الكيانات -Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. +قبل تعريف الكيانات ، من المهم التراجع والتفكير في كيفية هيكلة بياناتك وربطها. سيتم إجراء جميع الاستعلامات لنموذج البيانات المعرفة في مخطط الـ subgraph والكيانات المفهرسة بواسطة الـ subgraph. لهذا السبب ، من الجيد تعريف مخطط الـ subgraph بطريقة تتوافق مع احتياجات الـ dapp الخاص بك. قد يكون من المفيد تصور الكيانات على أنها "كائنات تحتوي على بيانات" ، وليس أحداثا أو دوال. -With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. +بواسطة The Graph ، يمكنك ببساطة تحديد أنواع الكيانات في `schema.graphql` ، وسيقوم Graph Node بإنشاء حقول المستوى الأعلى للاستعلام عن الـ instances الفردية والمجموعات من هذا النوع من الكيانات. كل نوع يجب أن يكون كيانا يكون مطلوبا للتعليق عليه باستخدام التوجيه `entity`. -### Good Example +### مثال جيد -The `Gravatar` entity below is structured around a Gravatar object and is a good example of how an entity could be defined. +تم تنظيم الكيان `Gravatar` أدناه حول كائن Gravatar وهو مثال جيد لكيفية تعريف الكيان. ```graphql type Gravatar @entity { @@ -182,9 +182,9 @@ type Gravatar @entity { } ``` -### Bad Example +### مثال سيئ -The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. +يستند مثالان الكيانات أدناه `GravatarAccepted` و `GravatarDeclined` إلى أحداث. لا يوصى بعمل map الأحداث أو استدعاءات الدوال للكيانات 1: 1. ```graphql type GravatarAccepted @entity { @@ -202,35 +202,35 @@ type GravatarDeclined @entity { } ``` -### Optional and Required Fields +### الحقول الاختيارية والمطلوبة -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: +يمكن تعريف حقول الكيانات على أنها مطلوبة أو اختيارية. الحقول المطلوبة يشار إليها بواسطة `!` في المخطط. إذا لم يتم تعيين حقل مطلوب في الـ mapping ، فستتلقى هذا الخطأ عند الاستعلام عن الحقل: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. +يجب أن يكون لكل كيان حقل `id` ، وهو من النوع `ID!` (string). حقل `id` يقدم كمفتاح رئيسي ويجب أن يكون فريدا في كل الكيانات لنفس النوع. 
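In practice this means the mapping has to assign every required (`!`) field before calling `save()`. A minimal AssemblyScript sketch using the `Gravatar` entity above (the same pattern is covered in full in the mappings section below):

```typescript
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // The unique `id` (an `ID!`, i.e. a string) is supplied when the entity is constructed.
  let gravatar = new Gravatar(event.params.id.toHex())

  // Every required field must be set before save(), otherwise queries that
  // touch it will fail with "Null value resolved for non-null field".
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```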
-### Built-In Scalar Types +### أنواع المقاييس المضمنة -#### GraphQL Supported Scalars +#### المقاييس المدعومة من GraphQL -We support the following scalars in our GraphQL API: +ندعم المقاييس التالية في GraphQL API الخاصة بنا: -| Type | Description | -| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| النوع | الوصف | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | +| `ID` | يتم تخزينه كـ `string`. | +| `String` | لقيم `string`. لا يتم دعم اNull ويتم إزالتها تلقائيا. | +| `Boolean` | لقيم `boolean`. | +| `Int` | GraphQL spec تعرف `Int` بحجم 32 بايت. | +| `BigInt` | أعداد صحيحة كبيرة. يستخدم لأنواع Ethereum `uint32` ، `int64` ، `uint64` ، ... ، `uint256`. ملاحظة: كل شيء تحت `uint32` ، مثل `int32` أو `uint24` أو `int8` يتم تمثيله كـ `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. يتراوح نطاق الأس من −6143 إلى +6144. مقربة إلى 34 رقما. | #### Enums -You can also create enums within a schema. Enums have the following syntax: +يمكنك أيضا إنشاء enums داخل مخطط. Enums لها البناء التالي: ```graphql enum TokenStatus { @@ -240,19 +240,19 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: +بمجرد تعريف الـ enum في المخطط ، يمكنك استخدام string لقيمة الـ enum لتعيين حقل الـ enum في الكيان. على سبيل المثال ، يمكنك تعيين `tokenStatus` إلى `SecondOwner` عن طريق تعريف الكيان أولا ثم تعيين الحقل بعد ذلك بـ `entity.tokenStatus = "SecondOwner`. يوضح المثال أدناه الشكل الذي سيبدو عليه كيان التوكن في حقل الـ enum: -More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). +يمكن العثور على مزيد من التفاصيل حول كتابة الـ enums في [GraphQL documentation](https://graphql.org/learn/schema/). -#### Entity Relationships +#### علاقات الكيانات -An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. 
It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. +قد يكون للكيان علاقة بواحد أو أكثر من الكيانات الأخرى في مخططك. قد يتم اجتياز هذه العلاقات في استعلاماتك. العلاقات في The Graph تكون أحادية الاتجاه. من الممكن محاكاة العلاقات ثنائية الاتجاه من خلال تعريف علاقة أحادية الاتجاه على "طرفي" العلاقة. -Relationships are defined on entities just like any other field except that the type specified is that of another entity. +يتم تعريف العلاقات على الكيانات تماما مثل أي حقل آخر عدا أن النوع المحدد هو كيان آخر. -#### One-To-One Relationships +#### العلاقات واحد لواحد -Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: +عرف نوع كيان `Transaction` بعلاقة فردية اختيارية مع نوع كيان `TransactionReceipt`: ```graphql type Transaction @entity { @@ -266,9 +266,9 @@ type TransactionReceipt @entity { } ``` -#### One-To-Many Relationships +#### علاقات واحد لمتعدد -Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: +عرف نوع كيان `TokenBalance` بعلاقة واحد لمتعدد المطلوبة مع نوع كيان Token: ```graphql type Token @entity { @@ -282,15 +282,15 @@ type TokenBalance @entity { } ``` -#### Reverse Lookups +#### البحث العكسي -Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. +يمكن تعريف البحث العكسي لكيان من خلال الحقل `derivedFrom`. يؤدي هذا إلى إنشاء حقل افتراضي للكيان الذي قد يتم الاستعلام عنه ولكن لا يمكن تعيينه يدويا من خلال الـ mappings API. بالأحرى، هو مشتق من العلاقة المعرفة للكيان الآخر. بالنسبة لمثل هذه العلاقات ، نادرا ما يكون من المنطقي تخزين جانبي العلاقة ، وسيكون أداء الفهرسة والاستعلام أفضل عندما يتم تخزين جانب واحد فقط ويتم اشتقاق الجانب الآخر. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +بالنسبة لعلاقات واحد_لمتعدد ، يجب دائما تخزين العلاقة في جانب "واحد" ، ويجب دائما اشتقاق جانب "المتعدد". سيؤدي تخزين العلاقة بهذه الطريقة ، بدلا من تخزين مجموعة من الكيانات على الجانب "متعدد" ، إلى أداء أفضل بشكل كبير لكل من فهرسة واستعلام الـ subgraph. بشكل عام ، يجب تجنب تخزين مصفوفات الكيانات. -#### Example +#### مثال -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +يمكننا إنشاء أرصدة لتوكن يمكن الوصول إليه من التوكن عن طريق اشتقاق حقل `tokenBalances`: ```graphql type Token @entity { @@ -305,13 +305,13 @@ type TokenBalance @entity { } ``` -#### Many-To-Many Relationships +#### علاقات متعدد_لمتعدد -For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. 
If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. +بالنسبة لعلاقات متعدد_لمتعدد ، مثل المستخدمين الذين قد ينتمي كل منهم إلى عدد من المؤسسات ، فإن الطريقة الأكثر وضوحا ، ولكنها ليست الأكثر أداء بشكل عام ، طريقة لنمذجة العلاقة كمصفوفة في كل من الكيانين المعنيين. إذا كانت العلاقة متماثلة ، فيجب تخزين جانب واحد فقط من العلاقة ويمكن اشتقاق الجانب الآخر. -#### Example +#### مثال -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +عرف البحث العكسي من نوع كيان `User` إلى نوع كيان `Organization`. في المثال أدناه ، يتم تحقيق ذلك من خلال البحث عن خاصية`members` من داخل كيان `Organization`. في الاستعلامات ، سيتم حل حقل `organizations` في `User` من خلال البحث عن جميع كيانات `Organization` التي تتضمن ID المستخدم. ```graphql type Organization @entity { @@ -327,7 +327,7 @@ type User @entity { } ``` -A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like +هناك طريقة أكثر فاعلية لتخزين هذه العلاقة وهي من خلال جدول mapping يحتوي على إدخال واحد لكل زوج `User` / `Organization` بمخطط مثل ```graphql type Organization @entity { @@ -349,7 +349,7 @@ type UserOrganization @entity { } ``` -This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: +يتطلب هذا الأسلوب أن تنحدر الاستعلامات إلى مستوى إضافي واحد لاستردادها ، على سبيل المثال ، المؤسسات للمستخدمين: ```graphql query usersWithOrganizations { @@ -364,11 +364,11 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +هذه الطريقة الأكثر إتقانا لتخزين علاقات متعدد_لمتعدد ستؤدي إلى بيانات مخزنة أقل للـ subgraph، وبالتالي غالبا إلى subgraph ما يكون أسرع في الفهرسة والاستعلام. -#### Adding comments to the schema +#### إضافة تعليقات إلى المخطط -As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: +وفقا لمواصفات GraphQL ، يمكن إضافة التعليقات فوق خاصيات كيان المخطط باستخدام الاقتباسات المزدوجة `""`. هذا موضح في المثال أدناه: ```graphql type MyFirstEntity @entity { @@ -378,13 +378,13 @@ type MyFirstEntity @entity { } ``` -## Defining Fulltext Search Fields +## تعريف حقول البحث عن النص الكامل -Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. +استعلامات بحث النص الكامل تقوم بفلترة وترتيب الكيانات بناء على إدخال نص البحث. استعلامات النص الكامل قادرة على إرجاع التطابقات للكلمات المتشابهة عن طريق معالجة إدخال نص الاستعلام إلى الـ stems قبل مقارنة ببيانات النص المفهرس. -A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. 
+تعريف استعلام النص الكامل يتضمن اسم الاستعلام وقاموس اللغة المستخدم لمعالجة حقول النص وخوارزمية الترتيب المستخدمة لترتيب النتائج والحقول المضمنة في البحث. كل استعلام نص كامل قد يمتد إلى عدة حقول ، ولكن يجب أن تكون جميع الحقول المضمنة من نوع كيان واحد. -To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. +لإضافة استعلام نص كامل ، قم بتضمين نوع `_Schema_` مع نص كامل موجه في مخطط GraphQL. ```graphql type _Schema_ @@ -407,7 +407,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. +يمكن استخدام حقل المثال `bandSearch` في الاستعلامات لفلترة كيانات `Band` استنادا إلى المستندات النصية في الـ `name` ، `description` و `bio`. انتقل إلى [GraphQL API - Queries](/developer/graphql-api#queries) للحصول على وصف لـ API بحث النص الكامل ولمزيد من الأمثلة المستخدمة. ```graphql query { @@ -420,49 +420,49 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> ** [ إدارة الميزات ](#experimental-features): ** من `specVersion` `0.0.4` وما بعده ، يجب الإعلان عن `fullTextSearch` ضمن قسم `features` في the subgraph manifest. -### Languages supported +### اللغات المدعومة -Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". +اختيار لغة مختلفة سيكون له تأثير نهائي ، على الرغم من دقتها في بعض الأحيان ، إلا أنها تؤثر على API بحث النص الكامل. يتم فحص الحقول التي يغطيها حقل استعلام نص_كامل في سياق اللغة المختارة ، وبالتالي فإن المفردات الناتجة عن التحليل واستعلامات البحث تختلف من لغة إلى لغة. على سبيل المثال: عند استخدام القاموس التركي المدعوم ، فإن "token" ينشأ من "toke" بينما قاموس اللغة الإنجليزية سيشتقها إلى "token". -Supported language dictionaries: +قواميس اللغة المدعومة: -| Code | Dictionary | -| ------ | ---------- | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | Portugese | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| الرمز | القاموس | +| ------ | ------- | +| simple | عام | +| da | دنماركي | +| nl | هولندي | +| en | إنجليزي | +| fi | فنلندي | +| fr | فرنسي | +| de | ألماني | +| hu | مجري | +| it | إيطالي | +| no | نرويجي | +| pt | برتغالي | +| ro | روماني | +| ru | روسي | +| es | إسباني | +| sv | سويدي | +| tr | تركي | -### Ranking Algorithms +### خوارزميات التصنيف -Supported algorithms for ordering results: +الخوارزميات المدعومة لترتيب النتائج: -| Algorithm | Description | -| ------------- | ----------------------------------------------------------------------- | -| rank | Use the match quality (0-1) of the fulltext query to order the results. | -| proximityRank | Similar to rank but also includes the proximity of the matches. 
| +| الخوارزمية | الوصف | +| ------------- | ------------------------------------------------------------ | +| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. | +| proximityRank | مشابه لـ rank ولكنه يشمل أيضا القرب من المطابقات. | -## Writing Mappings +## كتابة الـ Mappings -The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. تتم كتابة الـ Mappings في مجموعة فرعية من [ TypeScript ](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) تسمى \[AssemblyScript \](https: //github.com/AssemblyScript/assemblyscript/wiki) والتي يمكن ترجمتها إلى WASM ([ WebAssembly ](https://webassembly.org/)). يعتبر AssemblyScript أكثر صرامة من TypeScript العادي ، ولكنه يوفر تركيبا مألوفا. -For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. +لكل معالج حدث تم تعريفه في `subgraph.yaml` ضمن `mapping.eventHandlers` ، قم بإنشاء دالة صادرة بنفس الاسم. يجب أن يقبل كل معالج بارمترا واحدا يسمى `event` بنوع مطابق لاسم الحدث الذي تتم معالجته. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +في مثال الـ subgraph ، يحتوي `src / mapping.ts` على معالجات لأحداث `NewGravatar` و `UpdatedGravatar`: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -489,31 +489,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. +يأخذ المعالج الأول حدث `NewGravatar` وينشئ كيان `Gravatar` جديد بـ `new Gravatar (event.params.id.toHex ())` ،مالئا حقول الكيان باستخدام بارامترات الحدث المقابلة. يتم تمثيل instance الكيان بالمتغير `gravatar` ، مع قيمة معرف `()event.params.id.toHex`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`. +يحاول المعالج الثاني تحميل `Gravatar` الموجود من مخزن Graph Node. إذا لم يكن موجودا بعد ، فإنه يتم إنشاؤه عند الطلب. يتم بعد ذلك تحديث الكيان لمطابقة بارامترات الحدث الجديدة ، قبل حفظه مرة أخرى في المخزن باستخدام `()gravatar.save`. -### Recommended IDs for Creating New Entities +### الـ IDs الموصى بها لإنشاء كيانات جديدة -Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. 
+يجب أن يكون لكل كيان `id` فريدا بين جميع الكيانات من نفس النوع. يتم تعيين قيمة `id` للكيان عند إنشاء الكيان. فيما يلي بعض قيم `id` الموصى بها التي يجب مراعاتها عند إنشاء كيانات جديدة. ملاحظة: قيمة `id`يجب أن تكون `string`. - `event.params.id.toHex()` - `event.transaction.from.toHex()` - `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` -We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. +نحن نقدم [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) التي تحتوي على أدوات مساعدة للتفاعل مع مخزن Graph Node وملائمة للتعامل مع بيانات العقد الذكي والكيانات. يمكنك استخدام هذه المكتبة في mappings الخاص بك عن طريق استيراد `graphprotocol/graph-ts` in `mapping.ts@`. -## Code Generation +## توليد الكود -In order to make working smart contracts, events and entities easy and type-safe, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +من أجل جعل العقود الذكية والأحداث والكيانات سهلة وآمنة ، يمكن لـ Graph CLI إنشاء أنواع AssemblyScript من مخطط subgraph's GraphQL وعقد الـ ABIs المضمنة في مصادر البيانات. -This is done with +يتم ذلك بـ ```sh graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +ولكن في معظم الحالات ، تكون الـ subgraphs مهيأة مسبقا بالفعل عبر `package.json` للسماح لك ببساطة بتشغيل واحد مما يلي لتحقيق نفس الشيء: ```sh # Yarn @@ -523,7 +523,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with +سيؤدي هذا إلى إنشاء فئة AssemblyScript لكل عقد ذكي في ملفات ABI المذكورة في `subgraph.yaml` ، مما يسمح لك بربط هذه العقود بعناوين محددة في الـ mappings واستدعاء methods العقد للكتلة التي تتم معالجتها. وستنشئ أيضا فئة لكل حدث للعقد لتوفير وصول سهل إلى بارامترات الحدث بالإضافة إلى الكتلة والإجراء التي نشأ منها الحدث. كل هذه الأنواع تكتب إلى `//.ts`. في مثال الـ subgraph ، سيكون هذا `generated/Gravity/Gravity.ts`,مما يسمح للـ mappings باستيراد هذه الأنواع باستخدام ```javascript import { @@ -535,23 +535,23 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +بالإضافة إلى ذلك ، يتم إنشاء فئة واحدة لكل نوع كيان في مخطط الـ subgraph's GraphQL. توفر هذه الفئات إمكانية تحميل كيان نوغ آمن والقراءة والكتابة إلى حقول الكيان بالإضافة إلى `save()` method لكتابة الكيانات للمخزن. 
تمت كتابة جميع فئات الكيانات إلى `/schema.ts`, مما يسمح للـ mappings باستيرادها باستخدام ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** يجب إجراء إنشاء الكود مرة أخرى بعد كل تغيير في مخطط GraphQL أو ABI المضمنة في الـ manifest. يجب أيضا إجراؤه مرة واحدة على الأقل قبل بناء أو نشر الـ subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +إنشاء الكود لا يتحقق من كود الـ mapping الخاص بك في `src/mapping.ts`. إذا كنت تريد التحقق من ذلك قبل محاولة نشر الـ subgraph الخاص بك في Graph Explorer ، فيمكنك تشغيل `yarn build` وإصلاح أي أخطاء في تركيب الجملة التي قد يعثر عليها المترجم TypeScript. -## Data Source Templates +## قوالب مصدر البيانات -A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +النمط الشائع في عقود Ethereum الذكية هو استخدام عقود السجل أو المصنع ، حيث أحد العقود ينشئ أو يدير أو يشير إلى عدد اعتباطي من العقود الأخرى التي لكل منها حالتها وأحداثها الخاصة. عناوين هذه العقود الفرعية قد تكون أو لا تكون معروفة مقدما وقد يتم إنشاء و / أو إضافة العديد من هذه العقود بمرور الوقت. هذا هو السبب في أنه في مثل هذه الحالات ، يكون تعريف مصدر بيانات واحد أو عدد ثابت من مصادر البيانات أمرا مستحيلا ويلزم اتباع نهج أكثر ديناميكية: _قوالب مصدر البيانات_. -### Data Source for the Main Contract +### مصدر البيانات للعقد الرئيسي -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created on chain by the factory contract. +أولاً ، تقوم بتعريف مصدر بيانات منتظم للعقد الرئيسي. يُظهر المقتطف أدناه مثالا مبسطا لمصدر البيانات لعقد تبادل[ Uniswap ](https://uniswap.io). لاحظ معالج الحدث `NewExchange(address,address)`. يتم اصدار هذا عندما يتم إنشاء عقد تبادل جديد على السلسلة بواسطة عقد المصنع. ```yaml dataSources: @@ -576,9 +576,9 @@ dataSources: handler: handleNewExchange ``` -### Data Source Templates for Dynamically Created Contracts +### قوالب مصدر البيانات للعقود التي تم إنشاؤها ديناميكيا -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +بعد ذلك ، أضف _ قوالب مصدر البيانات _ إلى الـ manifest. وهي متطابقة مع مصادر البيانات العادية ، باستثناء أنها تفتقر إلى عنوان عقد معرف مسبقا تحت `source`. عادة ، يمكنك تعريف قالب واحد لكل نوع من أنواع العقود الفرعية المدارة أو المشار إليها بواسطة العقد الأصلي. 
```yaml dataSources: @@ -612,9 +612,9 @@ templates: handler: handleRemoveLiquidity ``` -### Instantiating a Data Source Template +### إنشاء قالب مصدر البيانات -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +في الخطوة الأخيرة ، تقوم بتحديث mapping عقدك الرئيسي لإنشاء instance لمصدر بيانات ديناميكي من أحد القوالب. في هذا المثال ، يمكنك تغيير mapping العقد الرئيسي لاستيراد قالب `Exchange` واستدعاء method الـ`Exchange.create(address)` لبدء فهرسة عقد التبادل الجديد. ```typescript import { Exchange } from '../generated/templates' @@ -626,13 +626,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> ** ملاحظة: ** مصدر البيانات الجديد سيعالج فقط الاستدعاءات والأحداث للكتلة التي تم إنشاؤها فيه وجميع الكتل التالية ، ولكنه لن يعالج البيانات التاريخية ، أي البيانات الموجودة في الكتل السابقة. > -> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. +> إذا كانت الكتل السابقة تحتوي على بيانات ذات صلة بمصدر البيانات الجديد ، فمن الأفضل فهرسة تلك البيانات من خلال قراءة الحالة الحالية للعقد وإنشاء كيانات تمثل تلك الحالة في وقت إنشاء مصدر البيانات الجديد. -### Data Source Context +### سياق مصدر البيانات -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +تسمح سياقات مصدر البيانات بتمرير تكوين إضافي عند عمل instantiating للقالب. في مثالنا ، لنفترض أن التبادلات مرتبطة بزوج تداول معين ، والذي تم تضمينه في حدث `NewExchange`. That information can be passed into the instantiated data source, like so: ```typescript import { Exchange } from '../generated/templates' @@ -644,7 +644,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +داخل mapping قالب `Exchange` ، يمكن الوصول إلى السياق بعد ذلك: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -653,11 +653,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +هناك setters و getters مثل `setString` و `getString` لجميع أنواع القيم. ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +يعد `startBlock` إعدادا اختياريا يسمح لك بتحديد كتلة في السلسلة والتي سيبدأ مصدر البيانات بالفهرسة. تعيين كتلة البدء يسمح لمصدر البيانات بتخطي الملايين من الكتل التي ربما ليست ذات صلة. 
عادةً ما يقوم مطور الرسم البياني الفرعي بتعيين `startBlock` إلى الكتلة التي تم فيها إنشاء العقد الذكي لمصدر البيانات. ```yaml dataSources: @@ -683,23 +683,23 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> ** ملاحظة: ** يمكن البحث عن كتلة إنشاء العقد بسرعة على Etherscan: > -> 1. Search for the contract by entering its address in the search bar. -> 2. Click on the creation transaction hash in the `Contract Creator` section. -> 3. Load the transaction details page where you'll find the start block for that contract. +> 1. ابحث عن العقد بإدخال عنوانه في شريط البحث. +> 2. انقر فوق hash إجراء الإنشاء في قسم `Contract Creator`. +> 3. قم بتحميل صفحة تفاصيل الإجراء حيث ستجد كتلة البدء لذلك العقد. -## Call Handlers +## معالجات الاستدعاء -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +بينما توفر الأحداث طريقة فعالة لجمع التغييرات ذات الصلة بحالة العقد ، تتجنب العديد من العقود إنشاء سجلات لتحسين تكاليف الغاز. في هذه الحالات ، يمكن لـ subgraph الاشتراك في الاستدعاء الذي يتم إجراؤه على عقد مصدر البيانات. يتم تحقيق ذلك من خلال تعريف معالجات الاستدعاء التي تشير إلى signature الدالة ومعالج الـ mapping الذي سيعالج الاستدعاءات لهذه الدالة. لمعالجة هذه المكالمات ، سيتلقى معالج الـ mapping الـ`ethereum.Call` كـ argument مع المدخلات المكتوبة والمخرجات من الاستدعاء. ستؤدي الاستدعاءات التي يتم إجراؤها على أي عمق في سلسلة استدعاء الاجراء إلى تشغيل الـ mapping، مما يسمح بالتقاط النشاط مع عقد مصدر البيانات من خلال عقود الـ proxy. -Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. +لن يتم تشغيل معالجات الاستدعاء إلا في إحدى الحالتين: عندما يتم استدعاء الدالة المحددة بواسطة حساب آخر غير العقد نفسه أو عندما يتم تمييزها على أنها خارجية في Solidity ويتم استدعاؤها كجزء من دالة أخرى في نفس العقد. -> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it. +> ** ملاحظة: ** معالجات الاستدعاء غير مدعومة في Rinkeby أو Goerli أو Ganache. تعتمد معالجات الاستدعاء حاليا على Parity tracing API و هذه الشبكات لا تدعمها. -### Defining a Call Handler +### تعريف معالج الاستدعاء -To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to. +لتعريف معالج استدعاء في الـ manifest الخاص بك ، ما عليك سوى إضافة مصفوفة `callHandlers` أسفل مصدر البيانات الذي ترغب في الاشتراك فيه. ```yaml dataSources: @@ -724,11 +724,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. 
The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +الـ `function` هي توقيع الدالة المعياري لفلترة الاستدعاءات من خلالها. خاصية `handler` هي اسم الدالة في الـ mapping الذي ترغب في تنفيذه عندما يتم استدعاء الدالة المستهدفة في عقد مصدر البيانات. -### Mapping Function +### دالة الـ Mapping -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +كل معالج استدعاء يأخذ بارامترا واحدا له نوع يتوافق مع اسم الدالة التي تم استدعاؤها. في مثال الـ subgraph أعلاه ، يحتوي الـ mapping على معالج عندما يتم استدعاء الدالة `createGravatar` ويتلقى البارامتر `CreateGravatarCall` كـ argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -743,22 +743,22 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +الدالة `handleCreateGravatar` تأخذ `CreateGravatarCall` جديد وهو فئة فرعية من`ethereum.Call`, ، مقدم بواسطة `graphprotocol/graph-ts@`, والذي يتضمن المدخلات والمخرجات المكتوبة للاستدعاء. يتم إنشاء النوع `CreateGravatarCall` من أجلك عندما تشغل`graph codegen`. -## Block Handlers +## معالجات الكتلة -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter. +بالإضافة إلى الاشتراك في أحداث العقد أو استدعاءات الدوال، قد يرغب الـ subgraph في تحديث بياناته عند إلحاق كتل جديدة بالسلسلة. لتحقيق ذلك ، يمكن لـ subgraph تشغيل دالة بعد كل كتلة أو بعد الكتل التي تطابق فلترا معرفا مسبقا. -### Supported Filters +### الفلاتر المدعومة ```yaml filter: kind: call ``` -_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ +_سيتم استدعاء المعالج المعرف مرة واحدة لكل كتلة تحتوي على استدعاء للعقد (مصدر البيانات) الذي تم تعريف المعالج ضمنه._ -The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. +عدم وجود فلتر لمعالج الكتلة سيضمن أن المعالج يتم استدعاؤه في كل كتلة. يمكن أن يحتوي مصدر البيانات على معالج كتلة واحد فقط لكل نوع فلتر. ```yaml dataSources: @@ -785,9 +785,9 @@ dataSources: kind: call ``` -### Mapping Function +### دالة الـ Mapping -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +دالة الـ mapping ستتلقى `ethereum.Block` كوسيطتها الوحيدة. مثل دوال الـ mapping للأحداث ، يمكن لهذه الدالة الوصول إلى كيانات الـ subgraph الموجودة في المخزن، واستدعاء العقود الذكية وإنشاء الكيانات أو تحديثها. 
```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -799,9 +799,9 @@ export function handleBlock(block: ethereum.Block): void { } ``` -## Anonymous Events +## أحداث الـ Anonymous -If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: +إذا كنت بحاجة إلى معالجة أحداث anonymous في Solidity ، فيمكن تحقيق ذلك من خلال توفير الموضوع 0 للحدث ، كما في المثال: ```yaml eventHandlers: @@ -810,20 +810,20 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +سيتم تشغيل حدث فقط عندما يتطابق كل من التوقيع والموضوع 0. بشكل افتراضي ، `topic0` يساوي hash توقيع الحدث. -## Experimental features +## الميزات التجريبية -Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +بدءًا من `specVersion` `0.0.4` ، يجب الإعلان صراحة عن ميزات الـ subgraph في قسم `features` في المستوى العلوي من ملف الـ manifest ، باستخدام اسم `camelCase` الخاص بهم ، كما هو موضح في الجدول أدناه: -| Feature | Name | -| --------------------------------------------------------- | ------------------------- | -| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | -| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -| [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | +| الميزة | الاسم | +| ----------------------------------------------------- | ------------------------- | +| [أخطاء غير فادحة](#non-fatal-errors) | `nonFatalErrors` | +| [البحث عن نص كامل](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| [IPFS على عقود Ethereum](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +على سبيل المثال ، إذا كان الـ subgraph يستخدم ** بحث النص الكامل ** و ** أخطاء غير فادحة ** ، فإن حقل `features` في الـ manifest يجب أن يكون: ```yaml specVersion: 0.0.4 @@ -834,27 +834,27 @@ features: dataSources: ... ``` -Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +لاحظ أن استخدام ميزة دون الإعلان عنها سيؤدي إلى حدوث ** خطأ تحقق من الصحة ** أثناء نشر الـ subgraph ، ولكن لن تحدث أخطاء إذا تم الإعلان عن الميزة ولكن لم يتم استخدامها. -### IPFS on Ethereum Contracts +### IPFS على عقود Ethereum -A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts. +حالة الاستخدام الشائعة لدمج IPFS مع Ethereum هي تخزين البيانات على IPFS التي ستكون مكلفة للغاية للحفاظ عليها في السلسلة ، والإشارة إلى IPFS hash في عقود Ethereum. -Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. 
In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). +بالنظر إلى IPFS hashes هذه ، يمكن لـ subgraphs قراءة الملفات المقابلة من IPFS باستخدام `ipfs.cat` و `ipfs.map`. للقيام بذلك بشكل موثوق ، من الضروري أن يتم تثبيت هذه الملفات على عقدة IPFS التي تتصل بها Graph Node التي تقوم بفهرسة الـ subgraph. في حالة [hosted service](https://thegraph.com/hosted-service),يكون هذا [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). -> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن `ipfs.cat` و `ipfs.map` ، ويجب على المطورين عدم النشر الـ subgraphs للشبكة باستخدام تلك الوظيفة عبر الـ Studio. -In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync). +من أجل تسهيل ذلك على مطوري الـ subgraph ، فريق Graph كتب أداة لنقل الملفات من عقدة IPFS إلى أخرى ، تسمى [ ipfs-sync ](https://github.com/graphprotocol/ipfs-sync). -> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. +> **[إدارة الميزات](#experimental-features):** يجب الإعلان عن `ipfsOnEthereumContracts` ضمن `features` في subgraph manifest. -### Non-fatal errors +### أخطاء غير فادحة -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic. +افتراضيا ستؤدي أخطاء الفهرسة في الـ subgraphs التي تمت مزامنتها بالفعل ، إلى فشل الـ subgraph وإيقاف المزامنة. يمكن بدلا من ذلك تكوين الـ Subgraphs لمواصلة المزامنة في حالة وجود أخطاء ، عن طريق تجاهل التغييرات التي أجراها المعالج والتي تسببت في حدوث الخطأ. يمنح هذا منشئوا الـ subgraph الوقت لتصحيح الـ subgraphs الخاصة بهم بينما يستمر تقديم الاستعلامات للكتلة الأخيرة ، على الرغم من أن النتائج قد تكون متعارضة بسبب الخطأ الذي تسبب في الخطأ. لاحظ أن بعض الأخطاء لا تزال كارثية دائما ، ولكي تكون غير فادحة ، يجب أن يُعرف الخطأ بأنه حتمي. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> ** ملاحظة: ** لا تدعم شبكة Graph حتى الآن الأخطاء غير الفادحة ، ويجب على المطورين عدم نشر الـ subgraphs على الشبكة باستخدام تلك الوظيفة عبر الـ Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +يتطلب تمكين الأخطاء غير الفادحة تعيين flag الميزة في subgraph manifest كالتالي: ```yaml specVersion: 0.0.4 @@ -864,7 +864,7 @@ features: ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +يجب أن يتضمن الاستعلام أيضا الاستعلام عن البيانات ذات التناقضات المحتملة من خلال الوسيطة `subgraphError`. 
يوصى أيضا بالاستعلام عن `_meta` للتحقق مما إذا كان الـ subgraph قد تخطى الأخطاء ، كما في المثال: ```graphql foos(first: 100, subgraphError: allow) { @@ -876,7 +876,7 @@ _meta { } ``` -If the subgraph encounters an error that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +إذا واجه الـ subgraph خطأ فسيرجع هذا الاستعلام كلا من البيانات وخطأ الـ graphql ضمن رسالة `"indexing_error"` ، كما في مثال الاستجابة هذا: ```graphql "data": { @@ -898,11 +898,11 @@ If the subgraph encounters an error that query will return both the data and a g ### Grafting onto Existing Subgraphs -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. +عندما يتم نشر الـ subgraph لأول مرة ، فإنه يبدأ في فهرسة الأحداث من كتلة نشوء السلسلة المتوافقة (أو من `startBlock` المعرفة مع كل مصدر بيانات) في بعض الحالات ، يكون من المفيد إعادة استخدام البيانات من subgraph موجود وبدء الفهرسة من كتلة لاحقة. يسمى هذا الوضع من الفهرسة بـ _Grafting_. Grafting ، على سبيل المثال ، مفيد أثناء التطوير لتجاوز الأخطاء البسيطة بسرعة في الـ mappings ، أو للحصول مؤقتا على subgraph موجود يعمل مرة أخرى بعد فشله. -> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> ** ملاحظة: ** الـ Grafting يتطلب أن المفهرس قد فهرس الـ subgraph الأساسي. لا يوصى باستخدامه على شبكة The Graph في الوقت الحالي ، ولا ينبغي للمطورين نشر الـ subgraphs على الشبكة باستخدام تلك الوظيفة عبر الـ Studio. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: +يتم عمل Grafte لـ subgraph في الـ subgraph الأساسي عندما يحتوي الـ subgraph manifest في `subgraph.yaml` على كتلة `graft` في المستوى العلوي: ```yaml description: ... @@ -911,18 +911,18 @@ graft: block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +عندما يتم نشر subgraph يحتوي الـ manifest على كتلة `graft` ، فإن Graph Node سوف تنسخ بيانات `base` subgraph بما في ذلك الـ `block` المعطى ثم يتابع فهرسة الـ subgraph الجديد من تلك الكتلة. يجب أن يوجد الـ subgraph الأساسي في instance الـ Graph Node المستهدف ويجب أن يكون قد تمت فهرسته حتى الكتلة المحددة على الأقل. بسبب هذا التقييد ، يجب استخدام الـ grafting فقط أثناء التطوير أو أثناء الطوارئ لتسريع إنتاج non-grafted subgraph مكافئ. 
-Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. أثناء تهيئة الـ grafted subgraph ، سيقوم الـ Graph Node بتسجيل المعلومات حول أنواع الكيانات التي تم نسخها بالفعل. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: +يمكن أن يستخدم الـ grafted subgraph مخطط GraphQL غير مطابق لمخطط الـ subgraph الأساسي ، ولكنه متوافق معه. يجب أن يكون مخطط الـ subgraph صالحا في حد ذاته ولكنه قد ينحرف عن مخطط الـ subgraph الأساسي بالطرق التالية: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces +- يضيف أو يزيل أنواع الكيانات +- يزيل الصفات من أنواع الكيانات +- يضيف صفات nullable لأنواع الكيانات +- يحول صفات non-nullable إلى صفات nullable +- يضيف قيما إلى enums +- يضيف أو يزيل الواجهات - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[إدارة الميزات](#experimental-features):**يجب الإعلان عن `التطعيم` ضمن `features` في subgraph manifest. From 54a4edb51af1861dfb5c6c28d225bf3e0f650c81 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:35 -0500 Subject: [PATCH 131/241] New translations introduction.mdx (Arabic) --- pages/ar/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ar/about/introduction.mdx b/pages/ar/about/introduction.mdx index 5f840c040400..3c6eefd5e586 100644 --- a/pages/ar/about/introduction.mdx +++ b/pages/ar/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: Introduction +title: مقدمة --- -This page will explain what The Graph is and how you can get started. +هذه الصفحة ستشرح The Graph وكيف يمكنك أن تبدأ. -## What The Graph Is +## ما هو The Graph -The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. +The Graph هو بروتوكول لامركزي وذلك لفهرسة البيانات والاستعلام عنها من blockchains ، بدءًا من Ethereum. حيث يمكننا من الاستعلام عن البيانات والتي من الصعب الاستعلام عنها بشكل مباشر. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. 
+المشاريع ذات العقود الذكية المعقدة مثل [ Uniswap ](https://uniswap.org/) و NFTs مثل [ Bored Ape Yacht Club ](https://boredapeyachtclub.com/) تقوم بتخزين البيانات على Ethereum blockchain ، مما يجعل من الصعب قراءة أي شيء بشكل مباشر عدا البيانات الأساسية من blockchain. -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +في حالة Bored Ape Yacht Club ، يمكننا إجراء قراءات أساسية على [ العقد ](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) مثل الحصول على مالك Ape معين ،أو الحصول على محتوى URI لـ Ape وذلك بناء على ال ID الخاص به، أو إجمالي العرض ، حيث تتم برمجة عمليات القراءة هذه بشكل مباشر في العقد الذكي ، ولكن في العالم الحقيقي هناك استعلامات وعمليات أكثر تقدمًا غير ممكنة مثل التجميع والبحث والعلاقات والفلترة الغير بسيطة. فمثلا، إذا أردنا الاستعلام عن Apes مملوكة لعنوان معين ،وفلترته حسب إحدى خصائصه، فلن نتمكن من الحصول على تلك المعلومات من خلال التفاعل بشكل مباشر مع العقد نفسه. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. +للحصول على هذه البيانات، يجب معالجة كل [`التحويلات`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) التي حدثت، وقراءة البيانات الوصفية من IPFS باستخدام Token ID و IPFS hash، ومن ثم تجميعه. حتى بالنسبة لهذه الأنواع من الأسئلة البسيطة نسبيا ، قد يستغرق الأمر ** ساعات أو حتى أيام ** لتطبيق لامركزي (dapp) يعمل في متصفح للحصول على إجابة. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +يمكنك أيضا إنشاء الخادم الخاص بك ، ومعالجة الإجراءات هناك ، وحفظها في قاعدة بيانات ، والقيام ببناء API endpoint من أجل الاستعلام عن البيانات. ومع ذلك ، فإن هذا الخيار يتطلب موارد كثيرة ، ويحتاج إلى صيانة ، ويقدم نقطة فشل واحدة ، ويكسر خصائص الأمان الهامة المطلوبة لتحقيق اللامركزية. -**Indexing blockchain data is really, really hard.** +**إن فهرسة بيانات الـ blockchain أمر صعب.** -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. 
+خصائص الـ Blockchain مثل finality أو chain reorganizations أو uncled blocks تعقد هذه العملية بشكل أكبر ، ولن تجعلها مضيعة للوقت فحسب ، بل أيضا تجعلها من الصعب من الناحية النظرية جلب نتائج الاستعلام الصحيحة من بيانات الـ blockchain. -The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +يقوم The Graph بحل هذا الأمر من خلال بروتوكول لامركزي والذي يقوم بفهرسة والاستعلام عن بيانات الـ blockchain بكفاءة عالية. حيث يمكن بعد ذلك الاستعلام عن APIs (الـ "subgraphs" المفهرسة) باستخدام GraphQL API قياسية. اليوم ، هناك خدمة مستضافة بالإضافة إلى بروتوكول لامركزي بنفس القدرات. كلاهما مدعوم بتطبيق مفتوح المصدر لـ [ Graph Node ](https://github.com/graphprotocol/graph-node). -## How The Graph Works +## كيف يعمل The Graph -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +The Graph يفهرس بيانات Ethereumالـ بناء على أوصاف الـ subgraph ، والمعروفة باسم subgraph manifest. حيث أن وصف الـ subgraph يحدد العقود الذكية ذات الأهمية لـ subgraph ، ويحدد الأحداث في تلك العقود التي يجب الانتباه إليها ، وكيفية تعيين بيانات الحدث إلى البيانات التي سيخزنها The Graph في قاعدة البيانات الخاصة به. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +بمجرد كتابة `subgraph manifest` ، يمكنك استخدام Graph CLI لتخزين التعريف في IPFS وإخبار المفهرس ببدء فهرسة البيانات لذلك الـ subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +يقدم هذا الرسم البياني مزيدًا من التفاصيل حول تدفق البيانات عند نشر الـsubgraph manifest ، التعامل مع إجراءات الـ Ethereum: ![](/img/graph-dataflow.png) -The flow follows these steps: +تدفق البيانات يتبع الخطوات التالية: -1. A decentralized application adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. التطبيق اللامركزي يضيف البيانات إلى الـ Ethereum من خلال إجراء على العقد الذكي. +2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. +3. 
يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. +4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. +5. التطبيق اللامركزي يستعلم عن الـ Graph Node للبيانات المفهرسة من الـ blockchain ، باستخدام node's [ GraphQL endpoint](https://graphql.org/learn/). يقوم الـ The Graph Node بدوره بترجمة استعلامات الـ GraphQL إلى استعلامات مخزن البيانات الأساسي الخاص به من أجل جلب هذه البيانات ، والاستفادة من إمكانات فهرسة المخزن. التطبيق اللامركزي يعرض تلك البيانات في واجهة مستخدم ، والتي يمكن للمستخدمين من خلالها إصدار إجراءات جديدة على Ethereum. والدورة تتكرر. -## Next Steps +## الخطوات التالية -In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +في الأقسام التالية سوف نخوض في المزيد من التفاصيل حول كيفية تعريف الـ subgraphs ، وكيفية نشرها ،وكيفية الاستعلام عن البيانات من الفهارس التي يبنيها الـ Graph Node. -Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +قبل أن تبدأ في كتابة الـ subgraph الخاص بك ، قد ترغب في إلقاء نظرة على The Graph Explorer واستكشاف بعض الـ subgraphs التي تم نشرها. تحتوي الصفحة الخاصة بكل subgraph على playground والذي يتيح لك الاستعلام عن بيانات الـ subgraph باستخدام GraphQL. From 4c01b9bc928414bbf79d494bcf340d6dd95ee616 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:37 -0500 Subject: [PATCH 132/241] New translations assemblyscript-api.mdx (Spanish) --- pages/es/developer/assemblyscript-api.mdx | 398 +++++++++++----------- 1 file changed, 199 insertions(+), 199 deletions(-) diff --git a/pages/es/developer/assemblyscript-api.mdx b/pages/es/developer/assemblyscript-api.mdx index 2afa431fe8c5..889990ea5f63 100644 --- a/pages/es/developer/assemblyscript-api.mdx +++ b/pages/es/developer/assemblyscript-api.mdx @@ -2,60 +2,60 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) +> Nota: ten en cuenta que si creaste un subgrafo usando el `graph-cli`/`graph-ts` en su versión `0.22.0`, debes saber que estás utilizando una versión antigua del AssemblyScript y te recomendamos mirar la [`guía para migrar`](/developer/assemblyscript-migration-guide) tu código -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Está página explica que APIs usar para recibir ciertos datos de los subgrafos. Dos tipos de estas APIs se describen a continuación: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- La [librería de Graph TypeScript](https://github.com/graphprotocol/graph-ts) (`graph-ts`) y +- el generador de códigos provenientes de los archivos del subgrafo, `graph codegen`. 
-It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +También es posible añadir otras librerías, siempre y cuando sean compatible con [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Debido a que ese lenguaje de mapeo es el que usamos, la [wiki de AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) es una fuente muy completa para las características de este lenguaje y contiene una librería estándar que te puede resultar útil. -## Installation +## Instalación -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +Los subgrafos creados con [`graph init`](/developer/create-subgraph-hosted) vienen configurados previamente. Todo lo necesario para instalar estás configuraciones lo podrás encontrar en uno de los siguientes comandos: ```sh yarn install # Yarn npm install # NPM ``` -If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: +Si el subgrafo fue creado con scratch, uno de los siguientes dos comandos podrá instalar la librería TypeScript como una dependencia: ```sh yarn add --dev @graphprotocol/graph-ts # Yarn npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## API Reference +## Referencias de API -The `@graphprotocol/graph-ts` library provides the following APIs: +La librería de `@graphprotocol/graph-ts` proporciona las siguientes APIs: -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and the Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. +- Una API de `ethereum` para trabajar con contratos inteligentes de Ethereum, eventos, bloques, transacciones y valores de Ethereum. +- Un `almacenamiento` para cargar y guardar entidades en Graph Node. +- Una API de `registro` para registrar los mensajes output de The Graph y el Graph Explorer. +- Una API para `ipfs` que permite cargar archivos provenientes de IPFS. +- Una API de `json` para analizar datos en formato JSON. +- Una API para `crypto` que permite usar funciones criptográficas. +- Niveles bajos que permiten traducir entre los distintos sistemas, tales como, Ethereum, JSON, GraphQL y AssemblyScript. -### Versions +### Versiones -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +La `apiVersion` en el manifiesto del subgrafo especifica la versión de la API correspondiente al mapeo que está siendo ejecutado en el Graph Node de un subgrafo en específico. La versión actual para la APÍ de mapeo es la 0.0.6. 
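For orientation, the following is a minimal sketch (not taken from the original docs) of where `apiVersion` sits inside a data source's `mapping` section of `subgraph.yaml`; the data source name and file path are hypothetical placeholders:

```yaml
dataSources:
  - kind: ethereum/contract
    name: ExampleContract # hypothetical data source name
    # source and network omitted for brevity
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6 # the mapping API version described in the table below
      language: wasm/assemblyscript
      # entities, abis and handlers omitted for brevity
      file: ./src/mapping.ts # hypothetical mapping file
```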
-| Version | Release notes | -|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Notas del lanzamiento | +|:-------:| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Se agregó la casilla `nonce` a las Transacciones de Ethereum, se
añadió `baseFeePerGas` para los bloques de Ethereum | +| 0.0.5 | Se actualizó la versión del AssemblyScript a la v0.19.10 (esta incluye cambios importantes, recomendamos leer la [`guía de migración`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` actualizada a `ethereum.transaction.gasLimit` | +| 0.0.4 | Añadido la casilla de `functionSignature` para la función de Ethereum SmartContractCall | +| 0.0.3 | Añadida la casilla `from` para la función de Ethereum Call
`ethereum.call.address` actualizada a `ethereum.call.to` | +| 0.0.2 | Añadida la casilla de `input` para la función de Ethereum Transaction | ### Built-in Types -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +La documentación sobre las actualizaciones integradas en AssemblyScript puedes encontrarla en la [wiki de AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -The following additional types are provided by `@graphprotocol/graph-ts`. +Las siguientes integraciones son proporcionada por `@graphprotocol/graph-ts`. #### ByteArray @@ -63,24 +63,24 @@ The following additional types are provided by `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` represents an array of `u8`. +`ByteArray` representa una matriz de `u8`. -_Construction_ +_Construcción_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromI32(x: i32): ByteArray` - Descompuesta en `x` bytes. +- `fromHexString(hex: string): ByteArray` - La longitud de la entrada debe ser uniforme. Prefijo `0x` es opcional. -_Type conversions_ +_Tipo de conversiones_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. +- `toHexString(): string` - Convierte un prefijo hexadecimal iniciado con `0x`. +- `toString(): string` - Interpreta los bytes en una cadena UTF-8. +- `toBase58(): string` - Codifica los bytes en una cadena base58. +- `toU32(): u32` - Interpeta los bytes en base a little-endian `u32`. Se ejecuta en casos de un overflow. +- `toI32(): i32` - Interpreta los bytes en base a little-endian `i32`. Se ejecuta en casos de un overflow. -_Operators_ +_Operadores_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. +- `equals(y: ByteArray): bool` – se puede escribir como `x == y`. #### BigDecimal @@ -88,30 +88,30 @@ _Operators_ import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal` is used to represent arbitrary precision decimals. +`BigDecimal` se usa para representar una precisión decimal arbitraria. -_Construction_ +_Construcción_ -- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. -- `static fromString(s: string): BigDecimal` – parses from a decimal string. +- `constructor(bigInt: BigInt)` – creará un `BigDecimal` en base a un`BigInt`. +- `static fromString(s: string): BigDecimal` – analizará una cadena de decimales. -_Type conversions_ +_Tipo de conversiones_ -- `toString(): string` – prints to a decimal string. +- `toString(): string` – colocará una cadena de decimales. -_Math_ +_Matemática_ -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. 
-- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. -- `neg(): BigDecimal` - can be written as `-x`. +- `plus(y: BigDecimal): BigDecimal` – puede escribirse como `x + y`. +- `minus(y: BigDecimal): BigDecimal` – puede escribirse como `x - y`. +- `times(y: BigDecimal): BigDecimal` – puede escribirse como `x * y`. +- `div(y: BigDecimal): BigDecimal` – puede escribirse como `x / y`. +- `equals(y: BigDecimal): bool` – puede escribirse como `x == y`. +- `notEqual(y: BigDecimal): bool` – puede escribirse como `x != y`. +- `lt(y: BigDecimal): bool` – puede escribirse como `x < y`. +- `lt(y: BigDecimal): bool` – puede escribirse como `x < y`. +- `gt(y: BigDecimal): bool` – puede escribirse como `x > y`. +- `ge(y: BigDecimal): bool` – puede escribirse como `x >= y`. +- `neg(): BigDecimal` - puede escribirse como `-x`. #### BigInt @@ -119,47 +119,47 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. - -The `BigInt` class has the following API: - -_Construction_ - -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. - - _Type conversions_ - -- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. -- `x.toString(): string` – turns `BigInt` into a decimal number string. -- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. -- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. - -_Math_ - -- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. -- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. -- `x.times(y: BigInt): BigInt` – can be written as `x * y`. -- `x.div(y: BigInt): BigInt` – can be written as `x / y`. -- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. -- `x.equals(y: BigInt): bool` – can be written as `x == y`. -- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. -- `x.lt(y: BigInt): bool` – can be written as `x < y`. -- `x.le(y: BigInt): bool` – can be written as `x <= y`. -- `x.gt(y: BigInt): bool` – can be written as `x > y`. -- `x.ge(y: BigInt): bool` – can be written as `x >= y`. -- `x.neg(): BigInt` – can be written as `-x`. -- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. -- `x.isZero(): bool` – Convenience for checking if the number is zero. -- `x.isI32(): bool` – Check if the number fits in an `i32`. -- `x.abs(): BigInt` – Absolute value. -- `x.pow(exp: u8): BigInt` – Exponentiation. -- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. -- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. -- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. 
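The `BigInt` and `BigDecimal` helpers listed above compose directly inside mapping code. The following is a minimal sketch (not from the original docs) that uses only the methods documented here; the concrete values are hypothetical placeholders:

```typescript
import { BigInt, BigDecimal } from '@graphprotocol/graph-ts'

// Hypothetical values, e.g. taken from event parameters
let a = BigInt.fromI32(100)
let b = BigInt.fromString('25')

let sum = a.plus(b) // 125, equivalent to a + b
let product = a.times(b) // 2500, equivalent to a * b
let isLarger = a.gt(b) // true

// Dividing a BigInt by a BigDecimal yields a BigDecimal result
let ratio = a.divDecimal(BigDecimal.fromString('3.0')) // roughly 33.333...
```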
+`BigInt` es usado para representar nuevos enteros grandes. Esto incluye valores de Ethereum similares a `uint32` hacia `uint256` y `int64` hacia `int256`. Todo por debajo de `uint32`. como el `int32`, `uint24` o `int8` se representa como `i32`. + +La clase `BigInt` tiene la siguiente API: + +_Construcción_ + +- `BigInt.fromI32(x: i32): BigInt` – creará un `BigInt` en base a un `i32`. +- `BigInt.fromString(s: string): BigInt`– Analizará un `BigInt` dentro de una cadena. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interpretará `bytes` sin firmar, o un little-endian entero. Si tu entrada es big-endian, deberás llamar primero el código `.reverse()`. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – interpretará los `bytes` como una firma, en un little-endian entero. Si tu entrada es big-endian, deberás llamar primero el código `.reverse()`. + + _Tipo de conversiones_ + +- `x.toHex(): string` - se transforma `BigInt` en un string de caracteres hexadecimales. +- `x.toString(): string` – se transforma `BigInt` en un string de numero decimal. +- `x.toI32(): i32` – retorna el `BigInt` como una `i32`; falla si el valor no encaja en `i32`. Es una buena idea comprobar primero `x.isI32()`. +- `x.toBigDecimal(): BigDecimal` - se convierte en un decimal sin parte fraccionaria. + +_Matemática_ + +- `x.plus(y: BigInt): BigInt` – puede ser escrito como `x + y`. +- `x.minus(y: BigInt): BigInt` – puede ser escrito como `x - y`. +- `x.times(y: BigInt): BigInt` – puede ser escrito como `x * y`. +- `x.div(y: BigInt): BigInt` – puede ser escrito como `x / y`. +- `x.mod(y: BigInt): BigInt` – puede ser escrito como `x % y`. +- `x.equals(y: BigInt): bool` – puede ser escrito como `x == y`. +- `x.notEqual(y: BigInt): bool` – puede ser escrito como `x != y`. +- `x.lt(y: BigInt): bool` – puede ser escrito como `x < y`. +- `x.le(y: BigInt): bool` – puede ser escrito como `x <= y`. +- `x.gt(y: BigInt): bool` – puede ser escrito como `x > y`. +- `x.ge(y: BigInt): bool` – puede ser escrito como `x >= y`. +- `x.neg(): BigInt` – puede ser escrito como `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divide por un decimal, dando un resultado decimal. +- `x.isZero(): bool` – Conveniencia para comprobar si el número es cero. +- `x.isI32(): bool` – Comprueba si el número encaja en un `i32`. +- `x.abs(): BigInt` –Valor absoluto. +- `x.pow(exp: u8): BigInt` – Exponenciación. +- `bitOr(x: BigInt, y: BigInt): BigInt` puede ser escrito como `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – puede ser escrito como `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – puede ser escrito como `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – puede ser escrito como `x >> y`. #### TypedMap @@ -167,15 +167,15 @@ _Math_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` puede utilizarse para almacenar pares clave-valor. Mira [este ejemplo](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). 
-The `TypedMap` class has the following API: +La `TypedMap` clase tiene la siguiente API: -- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` -- `map.set(key: K, value: V): void` – sets the value of `key` to `value` -- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map -- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map -- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not +- `new TypedMap()` – crea un mapa vacio con claves del tipo `K` y valores del tipo `T` +- `map.set(key: K, value: V): void` – establece el valor del `key` a `value` +- `map.getEntry(key: K): TypedMapEntry | null` – devuelve el par clave-valor de un `key` o `null` si el `key` no existe en el mapa +- `map.get(key: K): V | null` – returna el valor de una `key` o `null` si el `key` no existen en el mapa +- `map.isSet(key: K): bool` – returna `true` si el `key` existe en el mapa y `false` no es asi #### Bytes @@ -183,13 +183,13 @@ The `TypedMap` class has the following API: import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. +`Bytes` se utiliza para representar matrices de bytes de longitud arbitraria. Esto incluye los valores de Ethereum de tipo `bytes`, `bytes32` etc. -The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: +La clase `Bytes` extiende AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) y esto soporta todas las `Uint8Array` funcionalidades, mas los siguientes nuevos metodos: -- `b.toHex()` – returns a hexadecimal string representing the bytes in the array -- `b.toString()` – converts the bytes in the array to a string of unicode characters -- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) +- `b.toHex()` - devuelve un string hexadecimal que representa los bytes de la matriz +- `b.toString()` – convierte los bytes de la matriz en un string de caracteres unicode +- `b.toBase58()` –convierte un valor de Ethereum Bytes en codificación base58 (utilizada para los hashes IPFS) #### Address @@ -197,11 +197,11 @@ The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/Assem import { Address } from '@graphprotocol/graph-ts' ``` -`Address` extends `Bytes` to represent Ethereum `address` values. +`Address` extiende `Bytes` para representar valores de Ethereum `address`. -It adds the following method on top of the `Bytes` API: +Agrega el siguiente método sobre la API `Bytes`: -- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string +- `Address.fromString(s: string): Address` – crea un `Address` desde un string hexadecimal ### Store API @@ -209,13 +209,13 @@ It adds the following method on top of the `Bytes` API: import { store } from '@graphprotocol/graph-ts' ``` -The `store` API allows to load, save and remove entities from and to the Graph Node store. +La API `store` permite cargar, guardar y eliminar entidades desde y hacia el almacén de Graph Node. 
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Las entidades escritas en el almacén se asignan uno a uno con los tipos `@entity` definidos en el esquema GraphQL del subgrafo. Para hacer que el trabajo con estas entidades sea conveniente, el comando `graph codegen` provisto por el [Graph CLI](https://github.com/graphprotocol/graph-cli) genera clases de entidades, que son subclases del tipo construido `Entity`, con captadores y seteadores de propiedades para los campos del esquema, así como métodos para cargar y guardar estas entidades. -#### Creating entities +#### Creacion de entidades -The following is a common pattern for creating entities from Ethereum events. +El siguiente es un patrón común para crear entidades a partir de eventos de Ethereum. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. +Cuando un evento `Transfer` es encontrado mientras se procesa la cadena, es pasado al evento handler `handleTransfer` usando el tipo generado `Transfer` (con el alias de `TransferEvent` aquí para evitar un conflicto de nombres con el tipo de entidad). Este tipo permite acceder a datos como la transacción parent del evento y sus parámetros. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Cada entidad debe tener un ID único para evitar colisiones con otras entidades. Es bastante común que los parámetros de los eventos incluyan un identificador único que pueda ser utilizado. Nota: El uso del hash de la transacción como ID asume que ningún otro evento en la misma transacción crea entidades con este hash como ID. -#### Loading entities from the store +#### Carga de entidades desde el almacén -If an entity already exists, it can be loaded from the store with the following: +Si una entidad ya existe, se puede cargar desde el almacén con lo siguiente: ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +Como la entidad puede no existir todavía en el almacén, el `load` metodo returna al valor del tipo `Transfer | null`. Por lo tanto, puede ser necesario comprobar el caso `null` antes de utilizar el valor. 
-> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> **Nota:** La carga de entidades sólo es necesaria si los cambios realizados en la asignación dependen de los datos anteriores de una entidad. Mira en la siguiente sección las dos formas de actualizar las entidades existentes. -#### Updating existing entities +#### Actualización de las entidades existentes -There are two ways to update an existing entity: +Hay dos maneras de actualizar una entidad existente: -1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +1. Cargar la entidad con, por ejemplo `Transfer.load(id)`, establecer propiedades en la entidad, entonces `.save()` de nuevo en el almacen. +2. Simplemente crear una entidad con, por ejemplo `new Transfer(id)`, establecer las propiedades en la entidad, luego `.save()` en el almacen. Si la entidad ya existe, los cambios se fusionan con ella. -Changing properties is straight forward in most cases, thanks to the generated property setters: +Cambiar las propiedades es sencillo en la mayoría de los casos, gracias a los seteadores de propiedades generados: ```typescript let transfer = new Transfer(id) @@ -279,16 +279,16 @@ transfer.to = ... transfer.amount = ... ``` -It is also possible to unset properties with one of the following two instructions: +También es posible desajustar las propiedades con una de las dos instrucciones siguientes: ```typescript transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. +Esto sólo funciona con propiedades opcionales, es decir, propiedades que se declaran sin un `!` en GraphQL. Dos ejemplos serian `owner: Bytes` o `amount: BigInt`. -Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. +La actualización de las propiedades de la matriz es un poco más complicada, ya que al obtener una matriz de una entidad se crea una copia de esa matriz. Esto significa que las propiedades de la matriz tienen que ser establecidas de nuevo explícitamente después de cambiar la matriz. El siguiente asume `entity` tiene un `numbers: [BigInt!]!` campo. ```typescript // This won't work @@ -302,9 +302,9 @@ entity.numbers = numbers entity.save() ``` -#### Removing entities from the store +#### Eliminar entidades del almacen -There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +Actualmente no hay forma de remover una entidad a través de los tipos generados. 
En cambio, para remover una entidad es necesario pasar el nombre del tipo de entidad y el ID de la misma a `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' @@ -313,17 +313,17 @@ let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` -### Ethereum API +### API de Ethereum -The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. +La API de Ethereum proporciona acceso a los contratos inteligentes, a las variables de estado públicas, a las funciones de los contratos, a los eventos, a las transacciones, a los bloques y a la codificación/decodificación de los datos de Ethereum. -#### Support for Ethereum Types +#### Compatibilidad con los tipos de Ethereum -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +Al igual que con las entidades, `graph codegen` genera clases para todos los contratos inteligentes y eventos utilizados en un subgrafo. Para ello, los ABIs del contrato deben formar parte de la fuente de datos en el manifiesto del subgrafo. Normalmente, los archivos ABI se almacenan en una carpeta `abis/`. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +Con las clases generadas, las conversiones entre los tipos de Ethereum y los [built-in types](#built-in-types) tienen lugar detras de escena para que los autores de los subgrafos no tengan que preocuparse por ellos. -The following example illustrates this. Given a subgraph schema like +El siguiente ejemplo lo ilustra. Dado un esquema de subgrafos como ```graphql type Transfer @entity { @@ -333,7 +333,7 @@ type Transfer @entity { } ``` -and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: +y un `Transfer(address,address,uint256)` evento firmado en Ethereum, los valores `from`, `to` y `amount` del tipo `address`, `address` y `uint256` se convierten en `Address` y `BigInt`, permitiendo que se transmitan al `Bytes!` y `BigInt!` las propiedades de la `Transfer` entidad: ```typescript let id = event.transaction.hash.toHex() @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Events and Block/Transaction Data +#### Eventos y datos de bloques/transacciones -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +Los eventos de Ethereum pasados a los manejadores de eventos, como el evento `Transfer` de los ejemplos anteriores, no sólo proporcionan acceso a los parámetros del evento, sino también a su transacción parent y al bloque del que forman parte. 
Los siguientes datos pueden ser obtenidos desde las instancias de `event` (estas clases forman parte del módulo `ethereum` en `graph-ts`): ```typescript class Event { @@ -390,11 +390,11 @@ class Transaction { } ``` -#### Access to Smart Contract State +#### Acceso al Estado del Contrato Inteligente -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +El código generado por `graph codegen` también incluye clases para los contratos inteligentes utilizados en el subgrafo. Se pueden utilizar para acceder a variables de estado públicas y llamar a funciones del contrato en el bloque actual. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +Un patrón común es acceder al contrato desde el que se origina un evento. Esto se consigue con el siguiente código: ```typescript // Import the generated contract class @@ -411,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. +Mientras el `ERC20Contract` en Ethereum tenga una función pública de sólo lectura llamada `symbol`, se puede llamar con `.symbol()`. Para las variables de estado públicas se crea automáticamente un método con el mismo nombre. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Cualquier otro contrato que forme parte del subgrafo puede ser importado desde el código generado y puede ser vinculado a una dirección válida. -#### Handling Reverted Calls +#### Tratamiento de las llamadas revertidas -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +Si los métodos de sólo lectura de tu contrato pueden revertirse, entonces debes manejar eso llamando al método del contrato generado prefijado con `try_`. Por ejemplo, el contrato Gravity expone el método `gravatarToOwner`. Este código sería capaz de manejar una reversión en ese método: ```typescript let gravity = Gravity.bind(event.address) @@ -429,11 +429,11 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +Ten en cuenta que un nodo Graph conectado a un cliente Geth o Infura puede no detectar todas las reversiones, si confías en esto te recomendamos que utilices un nodo Graph conectado a un cliente Parity. -#### Encoding/Decoding ABI +#### Codificación/Descodificación ABI -Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. +Los datos pueden codificarse y descodificarse de acuerdo con el formato de codificación ABI de Ethereum utilizando las funciones `encode` y `decode` en el modulo `ethereum`. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -450,39 +450,39 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! 
let decoded = ethereum.decode('(address,uint256)', encoded) ``` -For more information: +Para mas informacion: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) - More [complex example](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72). -### Logging API +### API de Registro ```typescript import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +La API `log` permite a los subgrafos registrar información en la salida estándar del Graph Node así como en Graph Explorer. Los mensajes pueden ser registrados utilizando diferentes niveles de registro. Se proporciona una sintaxis de string de formato básico para componer los mensajes de registro a partir del argumento. -The `log` API includes the following functions: +La API `log` incluye las siguientes funciones: -- `log.debug(fmt: string, args: Array): void` - logs a debug message. -- `log.info(fmt: string, args: Array): void` - logs an informational message. -- `log.warning(fmt: string, args: Array): void` - logs a warning. -- `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.debug(fmt: string, args: Array): void` - registra un mensaje de depuración. +- `log.info(fmt: string, args: Array): void` - registra un mensaje informativo. +- `log.warning(fmt: string, args: Array): void` - registra una advertencia. +- `log.error(fmt: string, args: Array): void` - registra un error de mensaje. +- `log.critical(fmt: string, args: Array): void` – registra un mensaje critico _y_ termina el subgrafo. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. +La API `log` toma un formato string y una matriz de valores de string. A continuación, sustituye los marcadores de posición por los valores de string de la matriz. El primer `{}` marcador de posición se sustituye por el primer valor de la matriz, el segundo marcador de posición `{}` se sustituye por el segundo valor y así sucesivamente. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) ``` -#### Logging one or more values +#### Registro de uno o varios valores -##### Logging a single value +##### Registro de un valor -In the example below, the string value "A" is passed into an array to become`['A']` before being logged: +En el siguiente ejemplo, el valor del string "A" se pasa a una matriz para convertirse en`['A']` antes de ser registrado: ```typescript let myValue = 'A' @@ -493,9 +493,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Logging a single entry from an existing array +##### Registro de una sola entrada de una matriz existente -In the example below, only the first value of the argument array is logged, despite the array containing three values. 
+En el ejemplo siguiente, sólo se registra el primer valor de la matriz de argumentos, a pesar de que la matriz contiene tres valores. ```typescript let myArray = ['A', 'B', 'C'] @@ -506,9 +506,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -#### Logging multiple entries from an existing array +#### Registro de múltiples entradas de una matriz existente -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +Cada entrada de la matriz de argumentos requiere su propio marcador de posición `{}` en el string del mensaje de registro. El siguiente ejemplo contiene tres marcadores de posición `{}` en el mensaje de registro. Debido a esto, los tres valores de `myArray` se registran. ```typescript let myArray = ['A', 'B', 'C'] @@ -519,9 +519,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Logging a specific entry from an existing array +##### Registro de una entrada específica de una matriz existente -To display a specific value in the array, the indexed value must be provided. +Para mostrar un valor específico en la matriz, se debe proporcionar el valor indexado. ```typescript export function handleSomeEvent(event: SomeEvent): void { @@ -530,9 +530,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Logging event information +##### Registro de información de eventos -The example below logs the block number, block hash and transaction hash from an event: +El ejemplo siguiente registra el número de bloque, el hash de bloque y el hash de transacción de un evento: ```typescript import { log } from '@graphprotocol/graph-ts' @@ -546,15 +546,15 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -### IPFS API +### API IPFS ```typescript import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Los contratos inteligentes anclan ocasionalmente archivos IPFS en la cadena. Esto permite que las asignaciones obtengan los hashes de IPFS del contrato y lean los archivos correspondientes de IPFS. Los datos del archivo se devolverán en forma de `Bytes`, lo que normalmente requiere un procesamiento posterior, por ejemplo con la API `json` documentada más adelante en esta página. -Given an IPFS hash or path, reading a file from IPFS is done as follows: +Dado un hash o ruta de IPFS, la lectura de un archivo desde IPFS se realiza de la siguiente manera: ```typescript // Put this inside an event handler in the mapping @@ -567,9 +567,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). 
See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. +**Nota:** `ipfs.cat` no es deterministico en este momento. Si no se puede recuperar el archivo a través de la red IPFS antes de que se agote el tiempo de la solicitud, devolverá `null`. Debido a esto, siempre vale la pena comprobar el resultado para `null`. Para asegurar que los archivos puedan ser recuperados, tienen que estar anclados al nodo IPFS al que se conecta Graph Node. En el [servicio de host](https://thegraph.com/hosted-service), esto es [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). Mira la seccion [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) para mayor informacion. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +También es posible procesar archivos de mayor tamaño en streaming con `ipfs.map`. La función espera el hash o la ruta de un archivo IPFS, el nombre de una llamada de retorno y banderas para modificar su comportamiento: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,34 +599,34 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +La única bandera que se admite actualmente es `json`, que debe ser pasada por `ipfs.map`. Con la bandera `json`, el archivo IPFS debe consistir en una serie de valores JSON, un valor por línea. La llamada a `ipfs.map` leerá cada línea del archivo, la deserializará en un `JSONValue` y llamará a la llamada de retorno para cada una de ellas. El callback puede entonces utilizar operaciones de entidad para almacenar los datos del `JSONValue`. Los cambios de entidad se almacenan sólo cuando el manejador que llamó `ipfs.map` termina con éxito; mientras tanto, se mantienen en la memoria, y el tamaño del archivo que `ipfs.map` puede procesar es, por lo tanto, limitado. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +Si es exitoso, `ipfs.map` retorna `void`. Si alguna invocación de la devolución de llamada causa un error, el manejador que invocó `ipfs.map` es abortado, y el subgrafo es marcado como fallido. -### Crypto API +### API Cripto ```typescript import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: +La API `crypto` pone a disposición de los usuarios funciones criptográficas para su uso en mapeos. 
En este momento, sólo hay una: - `crypto.keccak256(input: ByteArray): ByteArray` -### JSON API +### API JSON ```typescript import { json, JSONValueKind } from '@graphprotocol/graph-ts' ``` -JSON data can be parsed using the `json` API: +Los datos JSON pueden ser analizados usando la API `json`: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence -- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromBytes(data: Bytes): JSONValue` – analiza datos JSON desde una matriz `Bytes` +- `json.try_fromBytes(data: Bytes): Result` – version segura de `json.fromBytes`, devuelve una variante de error si el análisis falla +- `json.fromString(data: Bytes): JSONValue` – analiza datos de JSON desde un valido UTF-8 `String` +- `json.try_fromString(data: Bytes): Result` – version segura de `json.fromString`, devuelve una variante de error si el analisis falla -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +La `JSONValue` clase proporciona una forma de extraer valores de un documento JSON arbitrario. Como los valores JSON pueden ser booleans, números, matrices y más, `JSONValue` viene con una propiedad `kind` para comprobar el tipo de un valor: ```typescript let value = json.fromBytes(...) @@ -635,22 +635,22 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -In addition, there is a method to check if the value is `null`: +Además, hay un método para comprobar si el valor es `null`: - `value.isNull(): boolean` -When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: +Cuando el tipo de un valor es cierto, se puede convertir a un [built-in type](#built-in-types) utilizando uno de los siguientes métodos: - `value.toBool(): boolean` - `value.toI64(): i64` - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) +- `value.toArray(): Array` -(y luego convierte `JSONValue` con uno de los 5 metodos anteriores) -### Type Conversions Reference +### Referencias de Tipo de Conversiones -| Source(s) | Destination | Conversion function | +| Origen(es) | Destino | Funcion de Conversion | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | ID | s.toHexString() | @@ -688,17 +688,17 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Data Source Metadata +### Metadatos de la Fuente de Datos -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +Puedes inspeccionar la dirección del contrato, la red y el contexto de la fuente de datos que invocó el manejador a través del namespaces `dataSource`: - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): 
DataSourceContext` -### Entity and DataSourceContext +### Entity y DataSourceContext -The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: +La clase base `Entity` y la clase hija `DataSourceContext` tienen ayudantes para establecer y obtener campos dinámicamente: - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From aa19d1f722da2e23b8dc95ff23cc514c564b14dd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:38 -0500 Subject: [PATCH 133/241] New translations introduction.mdx (Japanese) --- pages/ja/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ja/about/introduction.mdx b/pages/ja/about/introduction.mdx index 5f840c040400..2e8e73072b4b 100644 --- a/pages/ja/about/introduction.mdx +++ b/pages/ja/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: Introduction +title: イントロダクション --- -This page will explain what The Graph is and how you can get started. +このページでは、「The Graph」とは何か、どのようにして始めるのかを説明します。 -## What The Graph Is +## The Graph とは -The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. +The Graph は、Ethereum をはじめとするブロックチェーンのデータをインデックス化してクエリするための分散型プロトコルです。 これにより、直接クエリすることが困難のデータのクエリが容易に可能になります。 -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +[Uniswap](https://uniswap.org/)のような複雑なスマートコントラクトを持つプロジェクトや、[Bored Ape Yacht Club](https://boredapeyachtclub.com/) のような NFT の取り組みでは、Ethereum のブロックチェーンにデータを保存しているため、基本的なデータ以外をブロックチェーンから直接読み取ることは実に困難です。 -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +Bored Ape Yacht Club の場合、ある Ape の所有者を取得したり、ID に基づいて Ape のコンテンツ URI を取得したり、総供給量を取得したりといった基本的な読み取り操作は、 [スマートコントラクト](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) に直接プログラムされているので実行できますが、集約、検索、連携、フィルタリングなど、より高度な実世界のクエリや操作はできません。 例えば、あるアドレスが所有している NFT をクエリし、その特徴の 1 つでフィルタリングしたいと思っても、コントラクト自体と直接やりとりしてその情報を得ることはできません。 -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. 
+このデータを得るためには、これまでに発行されたすべての [`転送`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) イベントを処理し、トークン ID と IPFS ハッシュを使って IPFS からメタデータを読み取り、それを集約する必要があります。 このような比較的簡単な質問であっても、ブラウザ上で動作する分散型アプリケーション(dapp)が回答を得るには**数時間から数日**かかるでしょう。 -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +また、独自のサーバーを構築し、そこでトランザクションを処理してデータベースに保存し、その上にデータを照会するための API エンドポイントを構築することもできます。 しかし、この方法はリソースを必要とし、メンテナンスが必要で、単一障害点となり、分散化に必要な重要なセキュリティ特性を壊してしまいます。 -**Indexing blockchain data is really, really hard.** +**ブロックチェーンデータのインデックス作成は非常に困難です。** -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. +フィナリティ、チェーンの再編成、アンクルドブロックなどのブロックチェーンの特性は、このプロセスをさらに複雑にし、ブロックチェーンデータから正しいクエリ結果を取り出すことは、時間がかかるだけでなく、概念的にも困難です。 -The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +The Graph は、ブロックチェーンデータにインデックスを付けて、パフォーマンスの高い効率的なクエリを可能にする分散型プロトコルでこれを解決します。 そして、これらの API(インデックス化された「サブグラフ」)は、標準的な GraphQL API でクエリを行うことができます。 現在、同じ機能を持つホスト型のサービスと、分散型のプロトコルがあります。 どちらも、オープンソースで実装されている [Graph Node](https://github.com/graphprotocol/graph-node).によって支えられています。 -## How The Graph Works +## The Graph の仕組み -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +The Graph は、サブグラフマニフェストと呼ばれるサブグラフ記述に基づいて、Ethereum のデータに何をどのようにインデックスするかを学習します。 サブグラフマニフェストは、そのサブグラフで注目すべきスマートコントラクト、注目すべきコントラクト内のイベント、イベントデータと The Graph がデータベースに格納するデータとのマッピング方法などを定義します。 -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +`サブグラフのマニフェスト`を書いたら、グラフの CLI を使ってその定義を IPFS に保存し、インデクサーにそのサブグラフのデータのインデックス作成を開始するように指示します。 -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +この図では、サブグラフ・マニフェストがデプロイされた後のデータの流れについて、Ethereum のトランザクションを扱って詳しく説明しています。 ![](/img/graph-dataflow.png) -The flow follows these steps: +フローは以下のステップに従います。 -1. A decentralized application adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. 
The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. 分散型アプリケーションは、スマートコントラクトのトランザクションを介して Ethereum にデータを追加します。 +2. スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。 +3. Graph Node は、Ethereum の新しいブロックと、それに含まれる自分のサブグラフのデータを継続的にスキャンします。 +4. Graph Node は、これらのブロックの中からあなたのサブグラフの Ethereum イベントを見つけ出し、あなたが提供したマッピングハンドラーを実行します。 マッピングとは、イーサリアムのイベントに対応して Graph Node が保存するデータエンティティを作成または更新する WASM モジュールのことです。 +5. 分散型アプリケーションは、ノードの[GraphQL エンドポイント](https://graphql.org/learn/).を使って、ブロックチェーンからインデックスされたデータを Graph Node にクエリします。 Graph Node は、GraphQL のクエリを、基盤となるデータストアに対するクエリに変換し、ストアのインデックス機能を利用してデータを取得します。 分散型アプリケーションは、このデータをエンドユーザー向けのリッチな UI に表示し、エンドユーザーはこれを使って Ethereum 上で新しいトランザクションを発行します。 このサイクルが繰り返されます。 -## Next Steps +## 次のステップ -In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +次のセクションでは、サブグラフを定義する方法、サブグラフをデプロイする方法、Graph Node が構築したインデックスからデータをクエリする方法について、さらに詳しく説明します。 -Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +独自のサブグラフを書き始める前に、グラフエクスプローラを見て、既にデプロイされているサブグラフをいくつか見てみるといいでしょう。 各サブグラフのページには、そのサブグラフのデータを GraphQL でクエリするためのプレイグラウンドが用意されています。 From 6c0bf13992ada6330bbbd3a6924cd955214340b1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:39 -0500 Subject: [PATCH 134/241] New translations introduction.mdx (Korean) --- pages/ko/about/introduction.mdx | 48 ++++++++++++++++----------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/ko/about/introduction.mdx b/pages/ko/about/introduction.mdx index 5f840c040400..f401d4070f1f 100644 --- a/pages/ko/about/introduction.mdx +++ b/pages/ko/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: Introduction +title: 소개 --- -This page will explain what The Graph is and how you can get started. +이 페이지는 더 그래프가 무엇이며, 여러분들이 시작하는 방법에 대해 설명합니다. -## What The Graph Is +## 더 그래프란 무엇인가? -The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. +더 그래프는 이더리움으로부터 시작한 블록체인 데이터를 인덱싱하고 쿼리하기 위한 분산형 프로토콜입니다. 이는 직접 쿼리하기 어려운 데이터 쿼리를 가능하게 해줍니다. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +[유니스왑](https://uniswap.org/) 처럼 복잡한 스마트 컨트렉트를 구현하는 프로젝트나 [Bored Ape Yacht Club](https://boredapeyachtclub.com/)과 같은 NFT 이니셔티브들은 이더리움 블록체인에 데이터를 저장하기 때문에, 블록체인의 기본 데이터 외에는 직접적으로 읽기가 매우 어렵습니다. 
-In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +Bored Ape Yacht Club의 경우에 우리는 [해당 컨트렉트](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) 에서 특정 유인원의 주인을 확인하거나, 그들의 ID를 기반으로 Ape의 콘텐츠 URI를 확인하거나, 혹은 총 공급량을 확인하는 등의 기본적인 읽기 작업을 수행할 수 있습니다. 이는 이러한 읽기 작업이 스마트 컨트렉트에 직접적으로 프로그래밍 되었기 때문에 가능하지만, 집계, 검색, 관계 및 단순하지 않은 필터링과 같은 더 고급 적인 실생활 쿼리 및 작업은 불가능합니다. 예를 들어 여러분들이 특정 주소가 소유한 유인원을 쿼리하고, 그 특성 중 하나로 필터링하고자 하는 경우, 우리는 해당 컨트렉트 자체와 직접 상호 작용하여 해당 정보를 얻을 수 없습니다. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. +이러한 데이터를 얻기 위해서, 여러분들은 아마 그동안 발생한 모든 단일 [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) 이벤트 들을 모두 처리하고, 토큰 ID와 IPFS 해시를 사용하여 IPFS로부터 메타데이터를 읽은 후 이들을 집계해야 합니다. 이러한 유형의 비교적 간단한 쿼리에 대해서도, 아마 브라우저에서 실행되는 탈중앙화 애필리케이션(dapp)은 답을 얻기 위해 **몇 시간 혹은 며칠**이 걸릴 수도 있습니다. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +또한 여러분들은 데이터를 쿼리하기 위해 자체 서버를 구축하고, 그곳에서 트랜잭션을 처리하고, 데이터베이스에 저장하고, 그 위에 API 엔드포인트를 구축할 수도 있습니다. 하지만 이 옵션은 많은 리소스를 사용하고, 유지 관리가 필요하며, 단일 실패 지점을 제공하고 또한 탈중앙화에 필수적인 중요한 보안 속성을 손상시킵니다. -**Indexing blockchain data is really, really hard.** +**블록체인 데이터를 인덱싱하는 것은 정말로, 정말로 어렵습니다.** -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. +최종성, 체인 재구성 또는 언클 블록과 같은 블록체인 속성들은 이 프로세스를 더욱 복잡하게 만들고, 블록체인 데이터에서 정확한 쿼리 결과가 검색되도록 하기 위해 많은 시간이 소요될 뿐만 아니라 개념적으로도 어렵게 만듭니다. -The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +더 그래프는 블록체인 데이터를 인덱싱하고 효율적이고 효과적인 쿼리를 가능하게 하는 분산형 프로토콜로 이를 해결합니다. 이러한 API(인덱싱된 "서브그래프")들을 표준 GraphQL API로 쿼리할 수 있습니다. 오늘날, 호스팅 서비스와 동일한 기능을 가진 탈중앙화 프로토콜이 존재합니다. 둘 다 [Graph Node](https://github.com/graphprotocol/graph-node)의 오픈소스 구현에 의해 뒷받침 됩니다. 
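To make the contrast concrete, the sketch below shows the kind of request a browser dapp could send once such a subgraph exists: one GraphQL query for the apes owned by a given address, filtered by a single trait. The endpoint URL and the `apes` entity with its `owner` and `trait` fields are hypothetical placeholders, not names defined anywhere in these docs:

```typescript
// Hypothetical subgraph endpoint and schema, shown only for illustration.
const SUBGRAPH_URL = 'https://api.thegraph.com/subgraphs/name/example/bored-apes'

const query = `{
  apes(where: { owner: "0xOWNER_ADDRESS", trait: "laser eyes" }) {
    id
    tokenURI
  }
}`

async function fetchFilteredApes(): Promise<void> {
  const response = await fetch(SUBGRAPH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const result = await response.json()
  // The filtered result comes straight from the index, instead of the client
  // aggregating raw transfer events and IPFS metadata itself.
  console.log(result.data.apes)
}
```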
-## How The Graph Works +## 더 그래프의 작동 방식 -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +더 그래프는 서브 매니페스트라고 하는 서브그래프 설명을 기반으로 이더리움 데이터를 인덱싱하는 항목과 방법을 학습합니다. 서브그래프 설명은 서브그래프에 대한 스마트 컨트렉트, 주의를 기울여야 할 컨트렉트들의 이벤트 및 더 그래프가 데이터베이스에 저장할 데이터에 이벤트 데이터를 매핑하는 방법을 정의합니다. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +여러분들이 `subgraph manifest`를 작성한 후에 , Graph CLI를 사용하여 IPFS에 정의를 저장하고 인덱서에게 해당 서브그래프에 대한 데이터 인덱싱을 시작하도록 지시합니다. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +이 다이어그램은 이더리움 트랜잭션을 처리하는 서브그래프 매니페스트가 배포된 후 데이터 흐름에 대한 자세한 정보를 제공합니다. ![](/img/graph-dataflow.png) -The flow follows these steps: +해당 플로우는 다음 단계를 따릅니다 : -1. A decentralized application adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. 탈중앙화 애플리케이션은 스마트 컨트렉트의 트랜잭션을 통해 이더리움에 데이터를 추가합니다. +2. 스마트 컨트렉트는 트랜잭션을 처리하는 동안 하나 이상의 이벤트를 발생시킵니다. +3. 그래프 노드는 이더리움에서 새 블록들과 해당 블록들에 포함될 수 있는 서브그래프 데이터를 지속적으로 검색합니다. +4. 그래프 노드는 이러한 블록에서 서브그래프에 대한 이더리움 이벤트를 찾고 사용자가 제공한 매핑 핸들러를 실행합니다. 매핑은 이더리움 이벤트들에 대응해 그래프 노드가 저장하는 데이터 엔티티들을 생성하거나 업데이트하는 WASM 모듈입니다. +5. 탈중앙화 애플리케이션은 노드의 [GraphQL endpoint](https://graphql.org/learn/)를 사용하여 블록체인에서 인덱싱된 데이터를 위해 그래프 노드를 쿼리합니다. 더 그래프 노드는 GraphQL 쿼리를 기본 데이터 저장소에 대한 쿼리로 변환하여 이 데이터를 가져오고 저장소의 인덱싱 기능들을 활용합니다. 분산형 애플리케이션은 최종 사용자를 위해 이더리움에서 새로운 트랜잭션을 발생시킬 때 사용하는 풍부한 UI로 이 데이터를 표시합니다. 이 싸이클이 반복됩니다. -## Next Steps +## 다음 단계 -In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +다음 섹션에서 우리는 서브그래프를 정의하는 방법, 배포하는 방법 및 그래프 노드가 구축하는 인덱스들로부터 데이터를 쿼리하는 방법에 대해 더 자세히 알아볼 것입니다. -Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +자체 서브그래프를 작성하기 전에, 여러분들은 그래프 탐색기를 살펴보고 이미 배포된 일부 서브 그래프들에 대해 알아보길 희망하실 수 있습니다. 각 서브 그래프 페이지에는 여러분들이 GraphQL로 서브그래프의 데이터를 쿼리할 수 있는 영역이 포함되어 있습니다. 
From 0ea5389b6ffc2dc2d8adc73b263e0b1e17f1f0ff Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:40 -0500 Subject: [PATCH 135/241] New translations introduction.mdx (Chinese Simplified) --- pages/zh/about/introduction.mdx | 46 ++++++++++++++++----------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/pages/zh/about/introduction.mdx b/pages/zh/about/introduction.mdx index 5f840c040400..a0f852188001 100644 --- a/pages/zh/about/introduction.mdx +++ b/pages/zh/about/introduction.mdx @@ -1,47 +1,47 @@ --- -title: Introduction +title: 介绍 --- -This page will explain what The Graph is and how you can get started. +本页将解释什么是 The Graph,以及你如何开始。 -## What The Graph Is +## 什么是 The Graph -The Graph is a decentralized protocol for indexing and querying data from blockchains, starting with Ethereum. It makes it possible to query data that is difficult to query directly. +The Graph 是一个去中心化的协议,用于索引和查询区块链的数据,首先是从以太坊开始的。 它使查询那些难以直接查询的数据成为可能。 -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +像 [Uniswap](https://uniswap.org/)这样具有复杂智能合约的项目,以及像 [Bored Ape Yacht Club](https://boredapeyachtclub.com/) 这样的 NFTs 倡议,都在以太坊区块链上存储数据,因此,除了直接从区块链上读取基本数据外,真的很难。 -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply, as these read operations are programmed directly into the smart contract, but more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are not possible. For example, if we wanted to query for apes that are owned by a certain address, and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +在 Bored Ape Yacht Club 的案例中,我们可以对 [合约](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code)进行基本的读取操作,比如获得某个 Ape 的所有者,根据他们的 ID 获得某个 Ape 的内容 URI,或者总供应量,因为这些读取操作是直接编入智能合约的,但是更高级的现实世界的查询和操作,比如聚合、搜索、关系和非粗略的过滤是不可能的。 例如,如果我们想查询某个地址所拥有的 apes,并通过它的某个特征进行过滤,我们将无法通过直接与合约本身进行交互来获得该信息。 -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. Even for these types of relatively simple questions, it would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer. +为了获得这些数据,你必须处理曾经发出的每一个 [`传输`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) 事件,使用 Token ID 和 IPFS 的哈希值从 IPFS 读取元数据,然后将其汇总。 即使是这些类型的相对简单的问题,在浏览器中运行的去中心化应用程序(dapp)也需要**几个小时甚至几天** 才能得到答案。 -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is resource intensive, needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+你也可以建立你自己的服务器,在那里处理交易,把它们保存到数据库,并在上面建立一个 API 终端,以便查询数据。 然而,这种选择是资源密集型的,需要维护,会出现单点故障,并破坏了去中心化化所需的重要安全属性。 -**Indexing blockchain data is really, really hard.** +**为区块链数据编制索引真的非常非常难。** -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further, and make it not just time consuming but conceptually hard to retrieve correct query results from blockchain data. +区块链的属性,如最终性、链重组或未封闭的区块,使这一过程进一步复杂化,并使从区块链数据中检索出正确的查询结果不仅耗时,而且在概念上也很难。 -The Graph solves this with a decentralized protocol that indexes and enables the performant and efficient querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +The Graph 通过一个去中心化的协议解决了这一问题,该协议可以对区块链数据进行索引并实现高性能和高效率的查询。 这些 API(索引的 "子图")然后可以用标准的 GraphQL API 进行查询。 今天,有一个托管服务,也有一个具有相同功能的分去中心化协议。 两者都由 [](https://github.com/graphprotocol/graph-node)Graph Node -## How The Graph Works +## The Graph 是如何工作的 -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +的开放源码实现支持。 -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +Graph 根据子图描述(称为子图清单)来学习什么以及如何为以太坊数据建立索引。 子图描述定义了子图所关注的智能合约,这些合约中需要关注的事件,以及如何将事件数据映射到 The Graph 将存储在其数据库中的数据。 -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +一旦你写好了 `子图清单`,你就可以使用 Graph CLI 将该定义存储在 IPFS 中,并告诉索引人开始为该子图编制索引数据。 ![](/img/graph-dataflow.png) The flow follows these steps: -1. A decentralized application adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The decentralized application queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The decentralized application displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. 一个去中心化的应用程序通过智能合约上的交易向以太坊添加数据。 +2. 智能合约在处理交易时,会发出一个或多个事件。 +3. Graph 节点不断扫描以太坊的新区块和它们可能包含的子图的数据。 +4. Graph 节点在这些区块中为你的子图找到 Ethereum 事件并运行你提供的映射处理程序。 映射是一个 WASM 模块,它创建或更新 Graph Node 存储的数据实体,以响应 Ethereum 事件。 +5. 
去中心化的应用程序使用节点的[GraphQL 端点](https://graphql.org/learn/),从区块链的索引中查询 Graph 节点的数据。 Graph 节点反过来将 GraphQL 查询转化为对其底层数据存储的查询,以便利用存储的索引功能来获取这些数据。 去中心化的应用程序在一个丰富的用户界面中为终端用户显示这些数据,他们用这些数据在以太坊上发行新的交易。 就这样周而复始。 -## Next Steps +## 下一步 -In the following sections we will go into more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +流程遵循这些步骤: -Before you start writing your own subgraph, you might want to have a look at the Graph Explorer and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +在下面的章节中,我们将更详细地介绍如何定义子图,如何部署它们,以及如何从 Graph 节点建立的索引中查询数据。 From 03d8e73ab7b9e98697b9d421212aeba4595a0f9b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:41 -0500 Subject: [PATCH 136/241] New translations network.mdx (Spanish) --- pages/es/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/es/about/network.mdx b/pages/es/about/network.mdx index b19f08d12bc7..a81e6ef93cbb 100644 --- a/pages/es/about/network.mdx +++ b/pages/es/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Network Overview +title: Visión general de la red --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network es un protocolo de indexación descentralizado, el cual permite organizar los datos de la blockchain. Las aplicaciones utilizan GraphQL para consultar APIs públicas, llamadas subgrafos, que sirven para recuperar los datos que están indexados en la red. Con The Graph, los desarrolladores pueden construir sus aplicaciones completamente en una infraestructura pública. > GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## Overview +## Descripción -The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. +The Graph Network está formada por Indexadores, Curadores y Delegadores que proporcionan servicios a la red y proveen datos a las aplicaciones Web3. Los clientes utilizan estas aplicaciones y consumen los datos. -![Token Economics](/img/Network-roles@2x.png) +![Economía de los tokens](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Para garantizar la seguridad económica de The Graph Network y la integridad de los datos que se consultan, los participantes colocan en staking sus Graph Tokens (GRT). GRT es un token alojado en el protocolo ERC-20 de la blockchain Ethereum, utilizado para asignar recursos en la red. Los Indexadores, Curadores y Delegadores pueden prestar sus servicios y obtener ingresos por medio de la red, en proporción a su desempeño y la cantidad de GRT que hayan colocado en staking. 
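Since GRT is an ordinary ERC-20 contract at the address quoted above, a subgraph mapping can bind it just like the contract-state examples earlier in this patch series. A minimal sketch, assuming an ERC-20 ABI has been added to the subgraph manifest and that `graph codegen` produced an `ERC20Contract` class (the class name and import path are assumptions):

```typescript
import { Address } from '@graphprotocol/graph-ts'
// Assumed generated binding; the actual path depends on how the ABI is
// named in the subgraph manifest.
import { ERC20Contract } from '../generated/ERC20Contract/ERC20Contract'

// The GRT token address given above.
const GRT_ADDRESS = '0xc944e90c64b2c07662a292be6244bdf05cda44a7'

export function readGrtSymbol(): string {
  let grt = ERC20Contract.bind(Address.fromString(GRT_ADDRESS))
  // `symbol()` is a public read-only function of the token contract, so the
  // generated class exposes it directly; `try_symbol()` handles reverts.
  return grt.symbol()
}
```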
From 7f63f750902c76df20ee26b9233db4f8f981ee0b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:44 -0500 Subject: [PATCH 137/241] New translations network.mdx (Arabic) --- pages/ar/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ar/about/network.mdx b/pages/ar/about/network.mdx index b19f08d12bc7..7b0c538514ce 100644 --- a/pages/ar/about/network.mdx +++ b/pages/ar/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Network Overview +title: نظرة عامة حول الشبكة --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +شبكة The Graph هو بروتوكول فهرسة لامركزي لتنظيم بيانات الـ blockchain. التطبيقات تستخدم GraphQL للاستعلام عن APIs المفتوحة والتي تسمى subgraphs ، لجلب البيانات المفهرسة على الشبكة. باستخدام The Graph ، يمكن للمطورين إنشاء تطبيقات بدون خادم تعمل بالكامل على البنية الأساسية العامة. -> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> عنوان GRT Token: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## Overview +## نظره عامة -The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. +شبكة TheGraph تتكون من مفهرسين (Indexers) ومنسقين (Curators) ومفوضين (Delegator) حيث يقدمون خدمات للشبكة ويقدمون البيانات لتطبيقات Web3. حيث يتم استخدام تلك التطبيقات والبيانات من قبل المستهلكين. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +لضمان الأمن الاقتصادي لشبكة The Graph وسلامة البيانات التي يتم الاستعلام عنها ، يقوم المشاركون بـ stake لـ Graph Tokens (GRT). GRT رمزه ERC-20 على Ethereum blockchain ، يستخدم لمحاصصة (allocate) الموارد في الشبكة. المفوضون والمنسقون والمفهرسون النشطون يقدمون الخدمات لذلك يمكنهم الحصول على عوائد من الشبكة ، بما يتناسب مع حجم العمل الذي يؤدونه وحصة GRT الخاصة بهم. From 808a99dd7399129abd60b0db8a95d74fe073f6d4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:45 -0500 Subject: [PATCH 138/241] New translations network.mdx (Japanese) --- pages/ja/about/network.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/pages/ja/about/network.mdx b/pages/ja/about/network.mdx index b19f08d12bc7..83f01727e162 100644 --- a/pages/ja/about/network.mdx +++ b/pages/ja/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Network Overview +title: ネットワークの概要 --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. 
+グラフネットワークは、ブロックチェーンデータを整理するための分散型インデックスプロトコルです。 アプリケーションはGraphQLを使ってサブグラフと呼ばれるオープンなAPIにクエリし、ネットワーク上にインデックスされているデータを取得します。 The Graphを使うことで、開発者は公共のインフラ上で実行されるサーバーレスアプリケーションを構築することができます。 > GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## Overview +## 概要 -The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. +グラフネットワークは、インデクサー、キュレーター、デリゲーターにより構成され、ネットワークにサービスを提供し、Web3アプリケーションにデータを提供します。 消費者は、アプリケーションを利用し、データを消費します。 -![Token Economics](/img/Network-roles@2x.png) +![トークンエコノミクス](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +グラフネットワークの経済的な安全性と、クエリデータの完全性を確保するために、参加者はグラフトークン(GRT)をステークします。 GRTは、Ethereumブロックチェーン上でERC-20となっているワークトークンで、ネットワーク内のリソースを割り当てるために使用されます。 アクティブなインデクサー、キュレーター、デリゲーターはサービスを提供し、その作業量とGRTのステークに比例して、ネットワークから収入を得ることができます。 From 65fb90531af0496a2168b65c893fdbd0b38cd8ae Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:46 -0500 Subject: [PATCH 139/241] New translations network.mdx (Korean) --- pages/ko/about/network.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/ko/about/network.mdx b/pages/ko/about/network.mdx index b19f08d12bc7..b7a6a6139801 100644 --- a/pages/ko/about/network.mdx +++ b/pages/ko/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Network Overview +title: 네트워크 개요 --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +더 그래프 네트워크는 블록체인 데이터를 구성하기 위한 분산형 인덱싱 프로토콜입니다. 애플리케이션들은 GraphQL을 사용하여 서브그래프라고 하는 개방형 API를 쿼리하여 네트워크에서 인덱싱된 데이터를 검색합니다. 더 그래프를 사용하여 개발자는 완전히 범용 인프라에서 실행되는 서버리스 애플리케이션을 구축할 수 있습니다. -> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> GRT 토큰 주소: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## Overview +## 개요 -The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. +더 그래프 네트워크는 네트워크에 서비스를 제공하고 Web3 애플리케이션들에 데이터를 제공하는 인덱서, 큐레이터 및 위임자로 구성됩니다. 소비자는 애플리케이션을 사용하고 데이터를 소비합니다. -![Token Economics](/img/Network-roles@2x.png) +![토큰 이코노믹스](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. 
+더 그래프 네트워크의 경제적 보안과 쿼리 되는 데이터의 무결성을 보장하기 위해 참여자들은 그래프 토큰(GRT)을 스테이킹하고 사용합니다. GRT는 이더리움 블록체인 상의 ERC-20 작업 토큰이며, 네트워크 내의 리소스들을 할당하는 데 사용됩니다. 활성 인덱서, 큐레이터 및 위임자는 수행하는 작업의 양과 GRT 지분에 비례하여 네트워크에 서비스를 제공하고 수익을 창출할 수 있습니다. From fa3f99cf04c491134a7412cc02c6792f4e566c54 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:47 -0500 Subject: [PATCH 140/241] New translations network.mdx (Chinese Simplified) --- pages/zh/about/network.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/pages/zh/about/network.mdx b/pages/zh/about/network.mdx index b19f08d12bc7..7cdb059d6279 100644 --- a/pages/zh/about/network.mdx +++ b/pages/zh/about/network.mdx @@ -1,15 +1,15 @@ --- -title: Network Overview +title: 网络概述 --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs, to retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph网络是一个去中心化的索引协议,用于组织区块链数据。 应用程序使用GraphQL查询称为子图的开放API,以检索网络上的索引数据。 通过The Graph,开发者可以建立完全在公共基础设施上运行的无服务器应用程序。 -> GRT Token Address: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +> Grt合约地址:[0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -## Overview +## 概述 -The Graph Network consists of Indexers, Curators and Delegators that provide services to the network, and serve data to Web3 applications. Consumers use the applications and consume the data. +The Graph网络由索引人、策展人和委托人组成,为网络提供服务,并为Web3应用程序提供数据。 消费者使用应用程序并消费数据。 -![Token Economics](/img/Network-roles@2x.png) +![代币经济学](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens (GRT). GRT is a work token that is an ERC-20 on the Ethereum blockchain, used to allocate resources in the network. Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +为了确保The Graph 网络的经济安全和被查询数据的完整性,参与者将Graph 令牌(GRT)质押并使用。 GRT是一种工作代币,是以太坊区块链上的ERC-20,用于分配网络中的资源。 活跃的索引人、策展人和委托人可以提供服务,并从网络中获得收入,与他们的工作量和他们的GRT委托量成正比。 From 4322fc011cbb65e2e49cadd3e24aab35757f8a69 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:50 -0500 Subject: [PATCH 141/241] New translations assemblyscript-api.mdx (Arabic) --- pages/ar/developer/assemblyscript-api.mdx | 328 +++++++++++----------- 1 file changed, 164 insertions(+), 164 deletions(-) diff --git a/pages/ar/developer/assemblyscript-api.mdx b/pages/ar/developer/assemblyscript-api.mdx index 2afa431fe8c5..535d905333f8 100644 --- a/pages/ar/developer/assemblyscript-api.mdx +++ b/pages/ar/developer/assemblyscript-api.mdx @@ -2,25 +2,25 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) +> ملاحظة: إذا أنشأت رسمًا فرعيًا قبل إصدار `graph-cli` / `graph-ts` `0.22.0` ، فأنت تستخدم إصدارًا أقدم من AssemblyScript ، نوصي بإلقاء نظرة على [ `دليل الترحيل` ](/developer/assemblyscript-migration-guide) -This page documents what built-in APIs can be used when writing subgraph mappings. 
Two kinds of APIs are available out of the box: +هذه الصفحة توثق APIs المضمنة التي يمكن استخدامها عند كتابة subgraph mappings. Two kinds of APIs are available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- مكتبة Graph TypeScript(`graph-ts`) +- كود تم إنشاؤه من ملفات الـ subgraph بواسطة `graph codegen`. -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +من الممكن أيضا إضافة مكتبات أخرى مثل dependencies، طالما أنها متوافقة مع [ AssemblyScript ](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. -## Installation +## التثبيت -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +الـ Subgraphs التي تم إنشاؤها باستخدام [ `graph init` ](/developer/create-subgraph-hosted) تأتي مع dependencies مكونة مسبقا. كل ما هو مطلوب لتثبيت هذه الـ dependencies هو تشغيل أحد الأوامر التالية: ```sh yarn install # Yarn npm install # NPM ``` -If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: +إذا تم إنشاء الـ subgraph من البداية ، فسيقوم أحد الأمرين التاليين بتثبيت مكتبة Graph TypeScript كـ dependency: ```sh yarn add --dev @graphprotocol/graph-ts # Yarn @@ -29,33 +29,33 @@ npm install --save-dev @graphprotocol/graph-ts # NPM ## API Reference -The `@graphprotocol/graph-ts` library provides the following APIs: +توفر مكتبة `graphprotocol / graph-ts@` الـ APIs التالية: -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and the Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. +- واجهة برمجة تطبيقات `ethereum` للعمل مع عقود Ethereum الذكية والأحداث والكتل والإجراات وقيم Ethereum. +- واجهة برمجة تطبيقات `store` لتحميل الـ entities وحفظها من وإلى مخزن Graph Node. +- واجهة برمجة تطبيقات `log` لتسجيل الرسائل إلى خرج Graph Node ومستكشف Graph Explorer. +- واجهة برمجة تطبيقات `ipfs` لتحميل الملفات من IPFS. +- واجهة برمجة تطبيقات `json` لتحليل بيانات JSON. +- واجهة برمجة تطبيقات `crypto` لاستخدام وظائف التشفير. - Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. -### Versions +### إصدارات -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +الـ `apiVersion` في الـ subgraph manifest تحدد إصدار الـ mapping API الذي يتم تشغيله بواسطة Graph Node للـ subgraph المحدد. الاصدار الحالي لـ mapping API هو 0.0.6. 
-| Version | Release notes | -|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| الاصدار | ملاحظات الإصدار | +|:-------:| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | تمت إضافة حقل `nonce` إلى كائن إجراء الـ Ethereum
تمت إضافة `baseFeePerGas` إلى كائن Ethereum Block | +| 0.0.5 | تمت ترقية AssemblyScript إلى الإصدار 0.19.10 (يرجى الاطلاع على [ `دليل الترحيل` ](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` أعيد تسميته إلى `ethereum.transaction.gasLimit` | +| 0.0.4 | تمت إضافة حقل `functionSignature` إلى كائن Ethereum SmartContractCall | +| 0.0.3 | تمت إضافةحقل `from` إلى كائن Ethereum Call
`etherem.call.address` تمت إعادة تسميته إلى `ethereum.call.to` | +| 0.0.2 | تمت إضافة حقل `input` إلى كائن إجراء Ethereum | -### Built-in Types +### الأنواع المضمنة Built-in -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +يمكن العثور على الوثائق الخاصة بالأنواع الأساسية المضمنة في AssemblyScript في [ AssemblyScript wiki ](https://github.com/AssemblyScript/assemblyscript/wiki/Types). -The following additional types are provided by `@graphprotocol/graph-ts`. +يتم توفير الأنواع الإضافية التالية بواسطة `graphprotocol/graph-ts@`. #### ByteArray @@ -63,24 +63,24 @@ The following additional types are provided by `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` represents an array of `u8`. +تمثل `ByteArray` مصفوفة `u8`. _Construction_ - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromHexString(hex: string): ByteArray` - Input length must be even. البادئة بـ `0x` اختيارية. _Type conversions_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. +- `toHexString (): string` - تحول إلى سلسلة سداسية عشرية مسبوقة بـ `0x`. +- `toString (): string` - تترجم البايت كسلسلة UTF-8. +- `toBase58 (): string` - ترميز البايت لسلسلة base58. +- `toU32 (): u32` - يترجم البايت كـ `u32` little-endian. Throws in case of overflow. - `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. _Operators_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. +- `equals(y: ByteArray): bool` – يمكن كتابتها كـ `x == y`. #### BigDecimal @@ -88,30 +88,30 @@ _Operators_ import { BigDecimal } from '@graphprotocol/graph-ts' ``` -`BigDecimal` is used to represent arbitrary precision decimals. +يستخدم `BigDecimal` للتعبير عن الكسور العشرية. _Construction_ -- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. -- `static fromString(s: string): BigDecimal` – parses from a decimal string. +- `constructor(bigInt: BigInt)` – يُنشئ `BigDecimal` من `BigInt`. +- `static fromString(s: string): BigDecimal` – يحلل من سلسلة عشرية. _Type conversions_ -- `toString(): string` – prints to a decimal string. +- `toString(): string` – يطبع سلسلة عشرية. _Math_ -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. -- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. -- `neg(): BigDecimal` - can be written as `-x`. +- `plus(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ `x + y`. +- `minus(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ `x - y`. +- `times(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ `x * y`. 
+- `div(y: BigDecimal): BigDecimal` – يمكن كتابتها كـ`x / y`. +- `equals(y: BigDecimal): bool` – يمكن كتابتها كـ `x == y`. +- `notEqual(y: BigDecimal): bool` –يمكن كتابتها كـ `x != y`. +- `lt(y: BigDecimal): bool` – يمكن كتابتها كـ `x < y`. +- `le(y: BigDecimal): bool` – يمكن كتابتها كـ `x <= y`. +- `gt(y: BigDecimal): bool` – يمكن كتابتها كـ `x > y`. +- `ge(y: BigDecimal): bool` – يمكن كتابتها كـ `x >= y`. +- `neg(): BigDecimal` - يمكن كتابتها كـ `-x`. #### BigInt @@ -119,47 +119,47 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +يستخدم `BigInt` لتمثيل أعداد صحيحة كبيرة. يتضمن ذلك قيم Ethereum من النوع `uint32` إلى `uint256` و `int64` إلى `int256`. كل شيء أدناه `uint32` ، مثل `int32` أو `uint24` أو `int8` يتم تمثيله كـ `i32`. -The `BigInt` class has the following API: +تحتوي فئة `BigInt` على API التالية: _Construction_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromI32 (x: i32): BigInt` - ينشئ `BigInt` من `i32`. +- `BigInt.fromString(s: string): BigInt`– يحلل `BigInt` من سلسلة(string). +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – يترجم `bytes` باعتباره عددا صحيحا little-endian بدون إشارة. إذا كان الإدخال الخاص بك big-endian، فقم باستدعاء `.()reverse` أولا. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – يترجم `bytes` باعتباره عددا صحيحا little-endian له إشارة. إذا كان الإدخال الخاص بك big-endian، فاستدعي `.()reverse` أولا. _Type conversions_ -- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. +- `x.toHex(): string` – ترجع `BigInt` إلى سلسلة سداسية العشرية. - `x.toString(): string` – turns `BigInt` into a decimal number string. -- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. -- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. +- `x.toI32 (): i32` - ترجع `BigInt` كـ `i32` ؛ تفشل إذا كانت القيمة لا تتناسب مع `i32`. إنها لفكرة جيدة أن تتحقق أولا من `()x.isI32`. +- `x.toBigDecimal (): BigDecimal` - يحول إلى رقم عشري بدون جزء كسري. _Math_ -- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. -- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. -- `x.times(y: BigInt): BigInt` – can be written as `x * y`. -- `x.div(y: BigInt): BigInt` – can be written as `x / y`. -- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. -- `x.equals(y: BigInt): bool` – can be written as `x == y`. -- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. -- `x.lt(y: BigInt): bool` – can be written as `x < y`. -- `x.le(y: BigInt): bool` – can be written as `x <= y`. -- `x.gt(y: BigInt): bool` – can be written as `x > y`. -- `x.ge(y: BigInt): bool` – can be written as `x >= y`. -- `x.neg(): BigInt` – can be written as `-x`. -- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. 
-- `x.isZero(): bool` – Convenience for checking if the number is zero. -- `x.isI32(): bool` – Check if the number fits in an `i32`. -- `x.abs(): BigInt` – Absolute value. -- `x.pow(exp: u8): BigInt` – Exponentiation. -- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. -- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. -- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. +- `x.plus(y: BigInt): BigInt` – يمكن كتابتها كـ `x + y`. +- `x.minus(y: BigInt): BigInt` – يمكن كتابتها كـ `x - y`. +- `x.times(y: BigInt): BigInt` – يمكن كتابتها كـ `x * y`. +- `x.div(y: BigInt): BigInt` – يمكن كتابتها كـ `x / y`. +- `x.mod(y: BigInt): BigInt` – يمكن كتابتها كـ `x % y`. +- `x.equals(y: BigInt): bool` – يمكن كتابتها كـ `x == y`. +- `x.notEqual(y: BigInt): bool` –يمكن كتابتها كـ `x != y`. +- `x.lt(y: BigInt): bool` – يمكن كتابتها كـ `x < y`. +- `x.le(y: BigInt): bool` – يمكن كتابتها كـ `x <= y`. +- `x.gt(y: BigInt): bool` – يمكن كتابتها كـ `x > y`. +- `x.ge(y: BigInt): bool` – يمكن كتابتها كـ `x >= y`. +- `x.neg(): BigInt` – يمكن كتابتها كـ `-x`. +- `x.divDecimal (y: BigDecimal): BigDecimal` - يتم القسمة على عدد عشري ، مما يعطي نتيجة عشرية. +- `x.isZero(): bool` – ملائم للتحقق مما إذا كان الرقم صفرا. +- `x.isI32(): bool` – يتحقق مما إذا كان الرقم يناسب `i32`. +- `x.abs(): BigInt` – قيمة مطلقة. +- `x.pow(exp: u8): BigInt` – أس. +- `bitOr(x: BigInt, y: BigInt): BigInt` – يمكن كتابتها كـ `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – يمكن كتابتها كـ `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` –يمكن كتابتها كـ `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – يمكن كتابتها كـ `x >> y`. #### TypedMap @@ -167,15 +167,15 @@ _Math_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +يمكن استخدام `TypedMap` لتخزين أزواج key-value. انظر [هذا المثال ](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). -The `TypedMap` class has the following API: +تحتوي فئة `TypedMap` على API التالية: - `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` -- `map.set(key: K, value: V): void` – sets the value of `key` to `value` +- `map.set (key: K، value: V): void` - يضبط قيمة الـ `key` لـ `value` - `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map -- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map -- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not +- `map.get(key: K): V | null` – يرجع قيمة `key` أو `null` إذا كان المفتاح `` غير موجود في الخريطة +- `map.isSet(key: K): bool` – يرجع `true` إذا كان الـ `key` موجودا في الخريطة و `false` إذا كان غير موجود #### Bytes @@ -183,13 +183,13 @@ The `TypedMap` class has the following API: import { Bytes } from '@graphprotocol/graph-ts' ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. +يتم استخدام `Bytes` لتمثيل مصفوفات طول عشوائية من البايتات. يتضمن ذلك قيم إيثريوم من النوع `bytes` و `bytes32` وما إلى ذلك. 
-The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: +فئة `Bytes` ترث من [ Uint8Array ](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) و لذا فهو يدعم جميع وظائف `Uint8Array` ، بالإضافة إلى الـ methods الجديدة التالية: -- `b.toHex()` – returns a hexadecimal string representing the bytes in the array -- `b.toString()` – converts the bytes in the array to a string of unicode characters -- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) +- `b.toHex()` – ترع سلسلة سداسية عشرية تمثل الـ bytes في المصفوفة +- `b.toString()` – يحول الـ bytes في المصفوفة إلى سلسلة من unicode +- `b.toBase58()` – يحول قيمة Ethereum Bytes إلى ترميز base58 (يستخدم لـ IPFS hashes) #### Address @@ -197,11 +197,11 @@ The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/Assem import { Address } from '@graphprotocol/graph-ts' ``` -`Address` extends `Bytes` to represent Ethereum `address` values. +`Address` امتداد لـ`Bytes` لتمثيل قيم Ethereum `address`. -It adds the following method on top of the `Bytes` API: +إنها تضيف الـ method التالية أعلىAPI الـ `Bytes`: -- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string +- `Address.fromString(s: string): Address` – ينشئ `Address` من سلسلة سداسية عشرية ### Store API @@ -209,13 +209,13 @@ It adds the following method on top of the `Bytes` API: import { store } from '@graphprotocol/graph-ts' ``` -The `store` API allows to load, save and remove entities from and to the Graph Node store. +تسمح واجهة برمجة التطبيقات `store` بتحميل وحفظ وإزالة الكيانات من وإلى مخزن Graph Node. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. لتسهيل العمل مع هذه الكيانات ، فالأمر `graph codegen` المقدم بواسطة [ Graph CLI ](https://github.com/graphprotocol/graph-cli) ينشئ فئات الكيان ، وهي فئات فرعية من النوع المضمن `Entity` ، مع خصائص getters و setters للحقول في المخطط بالإضافة إلى methods لتحميل هذه الكيانات وحفظها. -#### Creating entities +#### إنشاء الكيانات -The following is a common pattern for creating entities from Ethereum events. +ما يلي هو نمط شائع لإنشاء كيانات من أحداث Ethereum. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. 
+عند مواجهة حدث `Transfer` أثناء معالجة السلسلة ، يتم تمريره إلى معالج الحدث `handleTransfer` باستخدام نوع `Transfer` المولدة (الاسم المستعار هنا لـ `TransferEvent` لتجنب تعارض التسمية مع نوع الكيان). يسمح هذا النوع بالوصول إلى البيانات مثل الإجراء الأصلي للحدث وبارامتراته. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +يجب أن يكون لكل كيان ID فريد لتجنب التضارب مع الكيانات الأخرى. من الشائع إلى حد ما أن تتضمن بارامترات الأحداث معرفا فريدا يمكن استخدامه. ملاحظة: استخدام hash الـ الإجراء كـ ID يفترض أنه لا توجد أحداث أخرى في نفس الإجراء تؤدي إلى إنشاء كيانات بهذا الـ hash كـ ID. -#### Loading entities from the store +#### تحميل الكيانات من المخزن -If an entity already exists, it can be loaded from the store with the following: +إذا كان الكيان موجودا بالفعل ، فيمكن تحميله من المخزن بالتالي: ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +نظرا لأن الكيان قد لا يكون موجودا في المتجر ، فإن method `load` تُرجع قيمة من النوع `Transfer | null`. وبالتالي قد يكون من الضروري التحقق من حالة `null` قبل استخدام القيمة. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> ** ملاحظة: ** تحميل الكيانات ضروري فقط إذا كانت التغييرات التي تم إجراؤها في الـ mapping تعتمد على البيانات السابقة للكيان. انظر القسم التالي للتعرف على الطريقتين لتحديث الكيانات الموجودة. -#### Updating existing entities +#### تحديث الكيانات الموجودة -There are two ways to update an existing entity: +هناك طريقتان لتحديث كيان موجود: -1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +1. حمل الكيان بـ `Transfer.load (id)` على سبيل المثال، قم بتعيين الخصائص على الكيان ، ثم `()save.` للمخزن. +2. ببساطة أنشئ الكيان بـ `new Transfer(id)` على سبيل المثال، قم بتعيين الخصائص على الكيان ، ثم `()save.` للمخزن. إذا كان الكيان موجودا بالفعل ، يتم دمج التغييرات فيه. -Changing properties is straight forward in most cases, thanks to the generated property setters: +يتم تغيير الخصائص بشكل مباشر في معظم الحالات ، وذلك بفضل خاصية الـ setters التي تم إنشاؤها: ```typescript let transfer = new Transfer(id) @@ -279,16 +279,16 @@ transfer.to = ... transfer.amount = ... ``` -It is also possible to unset properties with one of the following two instructions: +من الممكن أيضا إلغاء الخصائص بإحدى التعليمات التالية: ```typescript transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. +يعمل هذا فقط مع الخصائص الاختيارية ، أي الخصائص التي تم التصريح عنها بدون `!` في GraphQL. كمثالان `owner: Bytes` أو `amount: BigInt`. 
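Tying together the load, update, and unset patterns shown above, the following sketch loads a `Transfer` by its ID, clears the optional `from` field only when the entity already exists, and saves the result. The helper name and the import path are illustrative assumptions; they presume the standard `graph codegen` output layout and the `Transfer` entity from the earlier examples.

```typescript
// Hypothetical helper combining the patterns above. The import path assumes the
// usual `graph codegen` output and the `Transfer` entity used in the examples.
import { Transfer } from '../generated/schema'

export function clearTransferSender(id: string): void {
  // `load` returns `Transfer | null`, so guard against the entity not existing yet
  let transfer = Transfer.load(id)
  if (transfer == null) {
    return
  }

  // Setting the field to null is only valid because `from` is declared without `!`
  transfer.from = null
  transfer.save()
}
```

Because the entity is only saved when it was actually found, the helper does not create empty `Transfer` records as a side effect of the update.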
-Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. +يعد تحديث خصائص المصفوفة أكثر تعقيدا ، حيث يؤدي الحصول على مصفوفة من كيان إلى إنشاء نسخة من تلك المصفوفة. هذا يعني أنه يجب تعيين خصائص المصفوفة مرة أخرى بشكل صريح بعد تغيير المصفوفة. التالي يفترض `entity` به حقل `أرقام: [BigInt!]!`. ```typescript // This won't work @@ -302,28 +302,28 @@ entity.numbers = numbers entity.save() ``` -#### Removing entities from the store +#### إزالة الكيانات من المخزن -There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +لا توجد حاليا طريقة لإزالة كيان عبر الأنواع التي تم إنشاؤها. بدلاً من ذلك ، تتطلب إزالة الكيان تمرير اسم نوع الكيان و ID الكيان إلى `store.remove`: ```typescript import { store } from '@graphprotocol/graph-ts' ... -let id = event.transaction.hash.toHex() +()let id = event.transaction.hash.toHex store.remove('Transfer', id) ``` ### Ethereum API -The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. +يوفر Ethereum API الوصول إلى العقود الذكية ومتغيرات الحالة العامة ووظائف العقد والأحداث والإجراءات والكتل وتشفير / فك تشفير بيانات Ethereum. -#### Support for Ethereum Types +#### دعم أنواع الإيثيريوم -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +كما هو الحال مع الكيانات ، `graph codegen` ينشئ فئات لجميع العقود الذكية والأحداث المستخدمة في الـ subgraph. لهذا ، يجب أن يكون ABI العقد جزءا من مصدر البيانات في subgraph manifest. عادة ما يتم تخزين ملفات ABI في مجلد `/abis`. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +باستخدام الفئات التي تم إنشاؤها ، تحدث التحويلات بين أنواع Ethereum و [ الأنواع المضمنة ](#built-in-types) خلف الكواليس بحيث لا يضطر منشؤوا الـ subgraph إلى القلق بشأنها. -The following example illustrates this. Given a subgraph schema like +يوضح المثال التالي هذا. 
مخطط subgraph معطى مثل ```graphql type Transfer @entity { @@ -333,7 +333,7 @@ type Transfer @entity { } ``` -and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: +و توقيع الحدث `Transfer(address,address,uint256)` على Ethereum ، قيم `from` ، `to` و `amount` من النوع `address` و `address` و `uint256` يتم تحويلها إلى `Address` و `BigInt` ، مما يسمح بتمريرها إلى خصائص `!Bytes` و `!BigInt` للكيان `Transfer`: ```typescript let id = event.transaction.hash.toHex() @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Events and Block/Transaction Data +#### الأحداث وبيانات الكتلة/ الإجراء -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +أحداث Ethereum التي تم تمريرها إلى معالجات الأحداث ، مثل حدث `Transfer` في الأمثلة السابقة ، لا توفر فقط الوصول إلى بارامترات الحدث ولكن أيضا إلى الإجراء الأصلي والكتلة التي تشكل جزءا منها. يمكن الحصول على البيانات التالية من `event` instances (هذه الفئات هي جزء من وحدة الـ `ethereum` في `graph-ts`): ```typescript class Event { @@ -390,11 +390,11 @@ class Transaction { } ``` -#### Access to Smart Contract State +#### الوصول إلى حالة العقد الذكي Smart Contract -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +يشتمل الكود أيضا الذي تم إنشاؤه بواسطة `graph codegen` على فئات للعقود الذكية المستخدمة في الـ subgraph. يمكن استخدامها للوصول إلى متغيرات الحالة العامة واستدعاء دوال العقد في الكتلة الحالية. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +النمط الشائع هو الوصول إلى العقد الذي ينشأ منه الحدث. يتم تحقيق ذلك من خلال الكود التالي: ```typescript // Import the generated contract class @@ -411,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. +طالما أن `ERC20Contract` في الـ Ethereum له دالة عامة للقراءة فقط تسمى `symbol` ، فيمكن استدعاؤها بـ `()symbol.`. بالنسبة لمتغيرات الحالة العامة ، يتم إنشاء method بنفس الاسم تلقائيا. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +أي عقد آخر يمثل جزءا من الـ subgraph يمكن استيراده من الكود الذي تم انشاؤه ويمكن ربطه بعنوان صالح. #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +إذا كان من الممكن التراجع عن methods القراءة فقط لعقدك ، فيجب عليك التعامل مع ذلك عن طريق استدعاء method العقد التي تم انشاؤها والمسبوقة بـ على سبيل المثال ، يكشف عقد Gravity عن method `gravatarToOwner`. 
سيكون هذا الكود قادرا على معالجة التراجع في ذلك الـ method: ```typescript let gravity = Gravity.bind(event.address) @@ -429,11 +429,11 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +لاحظ أن Graph node المتصلة بعميل Geth أو Infura قد لا تكتشف جميع المرتجعات ، إذا كنت تعتمد على ذلك ، فإننا نوصي باستخدام Graph node المتصلة بعميل Parity. #### Encoding/Decoding ABI -Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. +يمكن تشفير البيانات وفك تشفيرها وفقا لتنسيق تشفير ABI الـ Ethereum باستخدام دالتي `encode` و `decode` في الوحدة الـ `ethereum`. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -450,11 +450,11 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! let decoded = ethereum.decode('(address,uint256)', encoded) ``` -For more information: +لمزيد من المعلومات: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) -- Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) -- More [complex example](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72). +- تشفير/فك تشفير [Rust library/CLI](https://github.com/rust-ethereum/ethabi) +- [أمثلة معقدة](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72) أكثر. ### Logging API @@ -462,17 +462,17 @@ For more information: import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +تسمح واجهة برمجة التطبيقات `log` لـ subgraphs بتسجيل المعلومات إلى الخرج القياسي لـ Graph Node بالإضافة إلى Graph Explorer. يمكن تسجيل الرسائل باستخدام مستويات سجل مختلفة. بنية سلسلة التنسيق الأساسي يتم توفيرها لتكوين رسائل السجل من argument. -The `log` API includes the following functions: +تتضمن واجهة برمجة التطبيقات `log` الدوال التالية: -- `log.debug(fmt: string, args: Array): void` - logs a debug message. -- `log.info(fmt: string, args: Array): void` - logs an informational message. -- `log.warning(fmt: string, args: Array): void` - logs a warning. -- `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.debug(fmt: string, args: Array): void` - تسجل رسالة debug. +- `log.info(fmt: string, args: Array): void` - تسجل رسالة اعلامية. +- `log.warning(fmt: string, args: Array): void` - تسجل تحذير. +- `log.error(fmt: string, args: Array): void` - تسجل رسالة خطأ. +- `log.critical(fmt: string, args: Array): void` – تسجل رسالة حرجة _و_ وتنهي الـ subgraph. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. +واجهة برمجة التطبيقات `log` تأخذ تنسيق string ومصفوفة من قيم string. ثم يستبدل placeholders بقيم string من المصفوفة. 
يتم استبدال placeholder `{}` الأول بالقيمة الأولى في المصفوفة ، ويتم استبدال placeholder `{}` الثاني بالقيمة الثانية وهكذا. ```typescript log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) @@ -482,7 +482,7 @@ log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue. ##### Logging a single value -In the example below, the string value "A" is passed into an array to become`['A']` before being logged: +في المثال أدناه ، يتم تمرير قيمة السلسلة "A" إلى مصفوفة لتصبح `['A']` قبل تسجيلها: ```typescript let myValue = 'A' @@ -495,7 +495,7 @@ export function handleSomeEvent(event: SomeEvent): void { ##### Logging a single entry from an existing array -In the example below, only the first value of the argument array is logged, despite the array containing three values. +في المثال أدناه ، يتم تسجيل القيمة الأولى فقط لـ argument المصفوفة، على الرغم من احتواء المصفوفة على ثلاث قيم. ```typescript let myArray = ['A', 'B', 'C'] @@ -508,7 +508,7 @@ export function handleSomeEvent(event: SomeEvent): void { #### Logging multiple entries from an existing array -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +يتطلب كل إدخال في arguments المصفوفة placeholder خاص به `{}` في سلسلة رسالة السجل. يحتوي المثال أدناه على ثلاثة placeholders `{}` في رسالة السجل. لهذا السبب ، يتم تسجيل جميع القيم الثلاث في `myArray`. ```typescript let myArray = ['A', 'B', 'C'] @@ -521,7 +521,7 @@ export function handleSomeEvent(event: SomeEvent): void { ##### Logging a specific entry from an existing array -To display a specific value in the array, the indexed value must be provided. +لعرض قيمة محددة في المصفوفة ، يجب توفير القيمة المفهرسة. ```typescript export function handleSomeEvent(event: SomeEvent): void { @@ -532,7 +532,7 @@ export function handleSomeEvent(event: SomeEvent): void { ##### Logging event information -The example below logs the block number, block hash and transaction hash from an event: +يسجل المثال أدناه رقم الكتلة و hash الكتلة و hash الإجراء من حدث: ```typescript import { log } from '@graphprotocol/graph-ts' @@ -552,9 +552,9 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +تقوم العقود الذكية أحيانا بإرساء ملفات IPFS على السلسلة. يسمح هذا للـ mappings بالحصول على IPFS hashes من العقد وقراءة الملفات المقابلة من IPFS. سيتم إرجاع بيانات الملف كـ `Bytes` ، والتي تتطلب عادة مزيدا من المعالجة ، على سبيل المثال مع واجهة برمجة التطبيقات `json` الموثقة لاحقا في هذه الصفحة. -Given an IPFS hash or path, reading a file from IPFS is done as follows: +IPFS hash أو مسار معطى، تتم قراءة ملف من IPFS على النحو التالي: ```typescript // Put this inside an event handler in the mapping @@ -567,9 +567,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. 
Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. +** ملاحظة: ** `ipfs.cat` ليست إجبارية في الوقت الحالي. لهذا السبب ، من المفيد دائما التحقق من نتيجة `null`. إذا تعذر استرداد الملف عبر شبكة Ipfs قبل انتهاء مهلة الطلب ، فسيعود `null`. لضمان إمكانية استرداد الملفات ، يجب تثبيتها في IPFS node التي تتصل بها Graph Node. على [الخدمة المستضافة ](https://thegraph.com/hosted-service) ، هذا هو [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/). راجع قسم [تثبيت IPFS](/developer/create-subgraph-hosted#ipfs-pinning) لمزيد من المعلومات. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +من الممكن أيضا معالجة الملفات الأكبر حجما بطريقة متدفقة باستخدام `ipfs.map`. تتوقع الدالة الـ hash أو مسارا لملف IPFS واسم الـ callback والـ flags لتعديل سلوكه: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,9 +599,9 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +الـ flag الوحيد المدعوم حاليا هو `json` ، والذي يجب تمريره إلى `ipfs.map`. باستخدام flag الـ `json` ، يجب أن يتكون ملف IPFS من سلسلة من قيم JSON ، قيمة واحدة لكل سطر. سيؤدي استدعاء `ipfs.map` إلى قراءة كل سطر في الملف ، وإلغاء تسلسله إلى `JSONValue` واستدعاء الـ callback لكل منها. يمكن لـ callback بعد ذلك استخدام عمليات الكيان لتخزين البيانات من `JSONValue`. يتم تخزين تغييرات الكيان فقط عندما ينتهي المعالج الذي يسمى `ipfs.map` بنجاح ؛ في غضون ذلك ، يتم الاحتفاظ بها في الذاكرة ، وبالتالي يكون حجم الملف الذي يمكن لـ `ipfs.map` معالجته يكون محدودا. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +عند النجاح ، يرجع `ipfs.map` `بـ void`. إذا تسبب أي استدعاء لـ callback في حدوث خطأ ، فسيتم إحباط المعالج الذي استدعى `ipfs.map` ، ويتم وضع علامة على الـ subgraph على أنه فشل. ### Crypto API @@ -609,7 +609,7 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: +توفر واجهة برمجة تطبيقات `crypto` دوال التشفير للاستخدام في mappings. الآن ، يوجد واحد فقط: - `crypto.keccak256(input: ByteArray): ByteArray` @@ -619,14 +619,14 @@ The `crypto` API makes a cryptographic functions available for use in mappings. 
import { json, JSONValueKind } from '@graphprotocol/graph-ts' ``` -JSON data can be parsed using the `json` API: +يمكن تحليل بيانات JSON باستخدام `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence -- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromBytes(data: Bytes): JSONValue` – يحول بيانات JSON من مصفوفة `Bytes` +- `json.try_fromBytes(data: Bytes): Result` – إصدار آمن من `json.fromBytes` ، يقوم بإرجاع متغير خطأ إذا فشل التحليل +- `json.fromString(data: Bytes): JSONValue` – يحلل بيانات JSON من UTF-8 `String` صالح +- `json.try_fromString(data: Bytes): Result` – اصدار آمن من `json.fromString`, يقوم بإرجاع متغير خطأ إذا فشل التحليل -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +توفر فئة `JSONValue` طريقة لسحب القيم من مستند JSON عشوائي. نظرا لأن قيم JSON يمكن أن تكون منطقية وأرقاما ومصفوفات وغيرها، فإن `JSONValue` يأتي مع خاصية `kind` للتحقق من نوع القيمة: ```typescript let value = json.fromBytes(...) @@ -635,22 +635,22 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -In addition, there is a method to check if the value is `null`: +بالإضافة إلى ذلك ، هناك method للتحقق مما إذا كانت القيمة `null`: - `value.isNull(): boolean` -When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: +عندما يكون نوع القيمة مؤكدا ، يمكن تحويلها إلى [ نوع مضمن ](#built-in-types) باستخدام إحدى الـ methods التالية: - `value.toBool(): boolean` - `value.toI64(): i64` - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) +- `value.toArray(): Array` - (ثم قم بتحويل `JSONValue` بإحدى الـ methods الخمس المذكورة أعلاه) ### Type Conversions Reference -| Source(s) | Destination | Conversion function | +| المصدر(المصادر) | الغاية | دالة التحويل | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | ID | s.toHexString() | @@ -690,7 +690,7 @@ When the type of a value is certain, it can be converted to a [built-in type](#b ### Data Source Metadata -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +يمكنك فحص عنوان العقد والشبكة وسياق مصدر البيانات الذي استدعى المعالج من خلال `dataSource` namespace: - `dataSource.address(): Address` - `dataSource.network(): string` @@ -698,7 +698,7 @@ You can inspect the contract address, network and context of the data source tha ### Entity and DataSourceContext -The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: +تحتوي فئة `Entity` الأساسية والفئة الفرعية `DataSourceContext` على مساعدين لتعيين الحقول والحصول عليها ديناميكيا: - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From 87db43c5a1cc4fa79b598fa090c0fe2b566bf3aa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= 
Date: Thu, 27 Jan 2022 20:09:51 -0500 Subject: [PATCH 142/241] New translations create-subgraph-hosted.mdx (Spanish) --- pages/es/developer/create-subgraph-hosted.mdx | 418 +++++++++--------- 1 file changed, 209 insertions(+), 209 deletions(-) diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index d31a88ea52a4..d6bb245d55c1 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -1,18 +1,18 @@ --- -title: Create a Subgraph +title: Crear un Subgrafo --- -Before being able to use the Graph CLI, you need to create your subgraph in [Subgraph Studio](https://thegraph.com/studio). You will then be able to setup your subgraph project and deploy it to the platform of your choice. Note that **subgraphs that do not index Ethereum mainnet will not be published to The Graph Network**. +Antes de poder utilizar el Graph CLI, tienes que crear tu subgrafo en [Subgraph Studio](https://thegraph.com/studio). A continuación, podrás configurar tu proyecto de subgrafo y desplegarlo en la plataforma que elijas. Ten en cuenta que **los subgrafos que no indexen Ethereum mainnet no se publicarán en The Graph Network**. -The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks, or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports. +El comando `graph init` se puede utilizar para configurar un nuevo proyecto de subgrafo, ya sea desde un contrato existente en cualquiera de las redes públicas de Ethereum, o desde un subgrafo de ejemplo. Este comando se puede utilizar para crear un subgrafo en el Subgraph Studio pasando `graph init --product subgraph-studio`. Si ya tienes un contrato inteligente desplegado en la red principal de Ethereum o en una de las redes de prueba, arrancar un nuevo subgrafo a partir de ese contrato puede ser una buena manera de empezar. Pero primero, un poco sobre las redes que admite The Graph. -## Redes admitidas +## Redes Que Admite -The Graph Network supports subgraphs indexing mainnet Ethereum: +The Graph Network admite subgrafos que indexan la red principal de Ethereum: - `mainnet` -**Additional Networks are supported in beta on the Hosted Service**: +**El Servicio Alojado (Hosted Service) admite Redes Adicionales en la versión beta**: - `mainnet` - `kovan` @@ -44,13 +44,13 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: - `aurora` - `aurora-testnet` -The Graph's Hosted Service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints. Newer networks will be marked as being in beta until the network has proven itself in terms of stability, reliability, and scalability. During this beta period, there is risk of downtime and unexpected behaviour. +El Hosted Service (servicio alojado) de The Graph se basa en la estabilidad y la fiabilidad de las tecnologías subyacentes, es decir, los endpoints JSON RPC proporcionados. Las redes más nuevas se marcarán como beta hasta que la red haya demostrado su estabilidad, fiabilidad y escalabilidad. 
Durante este período beta, existe el riesgo de que se produzcan tiempos de inactividad y comportamientos inesperados. -Remember that you will **not be able** to publish a subgraph that indexes a non-mainnet network to the decentralized Graph Network in [Subgraph Studio](/studio/subgraph-studio). +Recuerda que **no podrás** publicar un subgrafo que indexe una red no-mainnet a la Graph Network descentralizada en [Subgraph Studio](/studio/subgraph-studio). -## From An Existing Contract +## Desde un Contrato Existente -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +El siguiente comando crea un subgrafo que indexa todos los eventos de un contrato existente. Intenta obtener la ABI del contrato desde Etherscan y vuelve a solicitar una ruta de archivo local. Si falta alguno de los argumentos opcionales, te lleva a través de un formulario interactivo. ```sh graph init \ @@ -61,23 +61,23 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +El `` es el ID de tu subgrafo en Subgraph Studio, y se puede encontrar en la página de detalles de tu subgrafo. -## From An Example Subgraph +## Desde un Subgrafo de Ejemplo -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +El segundo modo que admite `graph init` es la creación de un nuevo proyecto a partir de un subgrafo de ejemplo. El siguiente comando lo hace: ``` graph init --studio ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +El subgrafo de ejemplo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. El subgrafo maneja estos eventos escribiendo entidades `Gravatar` en el almacén de Graph Node y asegurándose de que éstas se actualicen según los eventos. Las siguientes secciones repasarán los archivos que componen el manifiesto del subgrafo para este ejemplo. -## The Subgraph Manifest +## El Manifiesto de Subgrafo -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +El manifiesto del subgrafo `subgraph.yaml` define los contratos inteligentes que indexa tu subgrafo, a qué eventos de estos contratos prestar atención, y cómo mapear los datos de los eventos a las entidades que Graph Node almacena y permite consultar. La especificación completa de los manifiestos de subgrafos puede encontrarse en [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
-For the example subgraph, `subgraph.yaml` is: +Para este subgrafo de ejemplo, `subgraph.yaml` es: ```yaml specVersion: 0.0.4 @@ -118,59 +118,59 @@ dataSources: file: ./src/mapping.ts ``` -The important entries to update for the manifest are: +Las entradas importantes a actualizar para el manifiesto son: -- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the Hosted Service. +- `description`: una descripción legible para el ser humano de lo que es el subgrafo. Esta descripción es mostrada por The Graph Explorer cuando el subgrafo se despliega en el Servicio Alojado. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by the Graph Explorer. +- `repository`: la URL del repositorio donde se encuentra el manifiesto del subgrafo. Esto también lo muestra The Graph Explorer. -- `features`: a list of all used [feature](#experimental-features) names. +- `features`: una lista de todos los nombres de las [feature](#experimental-features) usadas. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the abi of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: la address del contrato inteligente, las fuentes del subgrafo, y el abi del contrato inteligente a utilizar. La address es opcional; omitirla permite indexar los eventos coincidentes de todos los contratos. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: el número opcional del bloque desde el que la fuente de datos comienza a indexar. En la mayoría de los casos, sugerimos utilizar el bloque en el que se creó el contrato. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the the schema.graphql file. +- `dataSources.mapping.entities`: las entidades que la fuente de datos escribe en el almacén. El esquema de cada entidad se define en el archivo schema.graphql. -- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. +- `dataSources.mapping.abis`: uno o más archivos ABI con nombre para el contrato fuente, así como cualquier otro contrato inteligente con el que interactúes desde los mapeos. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: enumera los eventos de contratos inteligentes a los que reacciona este subgrafo y los handlers en el mapeo -./src/mapping.ts en el ejemplo- que transforman estos eventos en entidades en el almacén. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: enumera las funciones de contrato inteligente a las que reacciona este subgrafo y los handlers en el mapeo que transforman las entradas y salidas a las llamadas de función en entidades en el almacén. 
-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional filter can be provided with the following kinds: call`. A`call` filter will run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: enumera los bloques a los que reacciona este subgrafo y los handlers en el mapeo que se ejecutan cuando un bloque se agrega a la cadena. Sin un filtro, el handler de bloque se ejecutará en cada bloque. Se puede proporcionar un filtro opcional con los siguientes tipos: call`. Un filtro`call` ejecutará el handler si el bloque contiene al menos una llamada al contrato de la fuente de datos. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +Un único subgrafo puede indexar datos de múltiples contratos inteligentes. Añade una entrada por cada contrato del que haya que indexar datos a la array `dataSources`. -The triggers for a data source within a block are ordered using the following process: +Los disparadores (triggers) de una fuente de datos dentro de un bloque se ordenan mediante el siguiente proceso: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers with in the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Los disparadores de eventos y llamadas se ordenan primero por el índice de la transacción dentro del bloque. +2. Los disparadores de eventos y llamadas dentro de la misma transacción se ordenan siguiendo una convención: primero los disparadores de eventos y luego los de llamadas, respetando cada tipo el orden en que se definen en el manifiesto. +3. Los disparadores de bloque se ejecutan después de los disparadores de eventos y llamadas, en el orden en que están definidos en el manifiesto. -These ordering rules are subject to change. +Estas normas de orden están sujetas a cambios. -### Getting The ABIs +### Obtención de ABIs -The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: +Los archivos ABI deben coincidir con tu(s) contrato(s). Hay varias formas de obtener archivos ABI: -- If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or using solc to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. +- Si estás construyendo tu propio proyecto, es probable que tengas acceso a tus ABIs más actuales. +- Si estás construyendo un subgrafo para un proyecto público, puedes descargar ese proyecto en tu computadora y obtener la ABI utilizando [`truffle compile`](https://truffleframework.com/docs/truffle/overview) or usando solc para compilar. 
+- También puedes encontrar la ABI en [Etherscan](https://etherscan.io/), pero no siempre es fiable, ya que la ABI que se sube allí puede estar desactualizada. Asegúrate de que tienes la ABI correcta, de lo contrario la ejecución de tu subgrafo fallará. -## The GraphQL Schema +## El Esquema GraphQL -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/developer/graphql-api) section. +El esquema para tu subgrafo está en el archivo `schema.graphql`. Los esquemas de GraphQL se definen utilizando el lenguaje de definición de interfaces de GraphQL. Si nunca has escrito un esquema GraphQL, es recomendable que consultes este manual sobre el sistema de tipos GraphQL. La documentación de referencia para los esquemas de GraphQL se puede encontrar en la sección [GraphQL API](/developer/graphql-api). -## Defining Entities +## Definir Entidades -Before defining entities, it is important to take a step back and think about how your data is structured and linked. All queries will be made against the data model defined in the subgraph schema and the entities indexed by the subgraph. Because of this, it is good to define the subgraph schema in a way that matches the needs of your dapp. It may be useful to imagine entities as "objects containing data", rather than as events or functions. +Antes de definir las entidades, es importante dar un paso atrás y pensar en cómo están estructurados y vinculados los datos. Todas las consultas se harán contra el modelo de datos definido en el esquema del subgrafo y las entidades indexadas por el subgrafo. Debido a esto, es bueno definir el esquema del subgrafo de una manera que coincida con las necesidades de tu dapp. Puede ser útil imaginar las entidades como "objetos que contienen datos", más que como eventos o funciones. -With The Graph, you simply define entity types in `schema.graphql`, and Graph Node will generate top level fields for querying single instances and collections of that entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. +Con The Graph, simplemente defines los tipos de entidad en `schema.graphql`, y Graph Node generará campos de nivel superior para consultar instancias individuales y colecciones de ese tipo de entidad. Cada tipo que deba ser una entidad debe ser anotado con una directiva `@entity`. -### Good Example +### Buen Ejemplo -The `Gravatar` entity below is structured around a Gravatar object and is a good example of how an entity could be defined. +La entidad `Gravatar` que aparece a continuación está estructurada en torno a un objeto Gravatar y es un buen ejemplo de cómo podría definirse una entidad. ```graphql type Gravatar @entity { @@ -182,9 +182,9 @@ type Gravatar @entity { } ``` -### Bad Example +### Mal Ejemplo -The example `GravatarAccepted` and `GravatarDeclined` entities below are based around events. It is not recommended to map events or function calls to entities 1:1. +El ejemplo las entidades `GravatarAccepted` y `GravatarDeclined` que aparecen a continuación se basan en eventos. No se recomienda asignar eventos o llamadas a funciones a entidades 1:1. 
```graphql type GravatarAccepted @entity { @@ -202,35 +202,35 @@ type GravatarDeclined @entity { } ``` -### Optional and Required Fields +### Campos Opcionales y Obligatorios -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If a required field is not set in the mapping, you will receive this error when querying the field: +Los campos de la entidad pueden definirse como obligatorios u opcionales. Los campos obligatorios se indican con el `!` en el esquema. Si un campo obligatorio no está establecido en la asignación, recibirá este error al consultar el campo: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which is of type `ID!` (string). The `id` field serves as the primary key, and needs to be unique among all entities of the same type. +Cada entidad debe tener un campo `id`, que es de tipo `ID!` (string). El campo `id` sirve de clave primaria y debe ser único entre todas las entidades del mismo tipo. -### Built-In Scalar Types +### Tipos de Scalars incorporados -#### GraphQL Supported Scalars +#### GraphQL admite Scalars -We support the following scalars in our GraphQL API: +Admitimos los siguientes scalars en nuestra API GraphQL: -| Type | Description | -| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `ID` | Stored as a `string`. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to have size of 32 bytes. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| Tipo | Descripción | +| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y addresses de Ethereum. | +| `ID` | Almacenado como un `string`. | +| `String` | Scalar para valores `string`. Los caracteres null no se admiten y se eliminan automáticamente. | +| `Boolean` | Scalar para valores `boolean`. | +| `Int` | The GraphQL spec define `Int` para tener un tamano de 32 bytes. | +| `BigInt` | Números enteros grandes. Usados para los tipos `uint32`, `int64`, `uint64`, ..., `uint256` de Ethereum. Nota: Todo debajo de `uint32`, como `int32`, `uint24` o `int8` es representado como `i32`. | +| `BigDecimal` | `BigDecimal` Decimales de alta precisión representados como un signo y un exponente. El rango de exponentes va de -6143 a +6144. Redondeado a 34 dígitos significativos. | #### Enums -You can also create enums within a schema. Enums have the following syntax: +También puedes crear enums dentro de un esquema. 
Los Enums tienen la siguiente sintaxis: ```graphql enum TokenStatus { @@ -240,19 +240,19 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner`. The example below demonstrates what the Token entity would look like with an enum field: +Una vez definido el enum en el esquema, puedes utilizar la representación del string del valor del enum para establecer un campo enum en una entidad. Por ejemplo, puedes establecer el `tokenStatus` a `SecondOwner` definiendo primero tu entidad y posteriormente estableciendo el campo con `entity.tokenStatus = "SecondOwner`. El ejemplo siguiente muestra el aspecto de la entidad Token con un campo enum: -More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). +Puedes encontrar más detalles sobre la escritura de enums en la [GraphQL documentation](https://graphql.org/learn/schema/). -#### Entity Relationships +#### Relaciones entre Entidades -An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship. +Una entidad puede tener una relación con una o más entidades de tu esquema. Estas relaciones pueden ser recorridas en tus consultas. Las relaciones en The Graph son unidireccionales. Es posible simular relaciones bidireccionales definiendo una relación unidireccional en cada "extremo" de la relación. -Relationships are defined on entities just like any other field except that the type specified is that of another entity. +Las relaciones se definen en las entidades como cualquier otro campo, salvo que el tipo especificado es el de otra entidad. -#### One-To-One Relationships +#### Relaciones Uno a Uno -Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: +Define un tipo de entidad `Transaction` con una relación opcional de uno a uno con un tipo de entidad `TransactionReceipt`: ```graphql type Transaction @entity { @@ -266,9 +266,9 @@ type TransactionReceipt @entity { } ``` -#### One-To-Many Relationships +#### Relaciones Uno-a-Muchos -Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: +Define un tipo de entidad `TokenBalance` con una relación requerida de uno a varios con un tipo de entidad Token: ```graphql type Token @entity { @@ -282,15 +282,15 @@ type TokenBalance @entity { } ``` -#### Reverse Lookups +#### Búsquedas Inversas -Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. +Se pueden definir búsquedas inversas en una entidad a través del campo `@derivedFrom`. 
Esto crea un campo virtual en la entidad que puede ser consultado pero que no puede ser establecido manualmente a través de la API de mapeo. Más bien, se deriva de la relación definida en la otra entidad. Para este tipo de relaciones, rara vez tiene sentido almacenar ambos lados de la relación, y tanto la indexación como el rendimiento de la consulta serán mejores cuando sólo se almacene un lado y el otro se derive. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +En el caso de las relaciones uno a muchos, la relación debe almacenarse siempre en el lado "uno", y el lado "muchos" debe derivarse siempre. Almacenar la relación de esta manera, en lugar de almacenar una array de entidades en el lado "muchos", resultará en un rendimiento dramáticamente mejor tanto para la indexación como para la consulta del subgrafo. En general, debe evitarse, en la medida de lo posible, el almacenamiento de arrays de entidades. -#### Example +#### Ejemplo -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +Podemos hacer que los balances de un token sean accesibles desde el token derivando un campo `tokenBalances`: ```graphql type Token @entity { @@ -305,13 +305,13 @@ type TokenBalance @entity { } ``` -#### Many-To-Many Relationships +#### Relaciones de Muchos a Muchos -For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. +Para las relaciones de muchos a muchos, como los usuarios pueden pertenecer a cualquier número de organizaciones, la forma más directa, pero generalmente no la más eficaz, de modelar la relación es como un array en cada una de las dos entidades implicadas. Si la relación es simétrica, sólo es necesario almacenar un lado de la relación y el otro puede derivarse. -#### Example +#### Ejemplo -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Define una búsqueda inversa desde un tipo de entidad `User` a un tipo de entidad `Organization`. En el ejemplo siguiente, esto se consigue buscando el atributo `members` desde la entidad `Organization`. En las consultas, el campo `organizations` en `User` se resolverá buscando todas las entidades de `Organization` que incluyan el ID del usuario. 
```graphql type Organization @entity { @@ -327,7 +327,7 @@ type User @entity { } ``` -A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like +Una forma más eficaz de almacenar esta relación es a través de una tabla de asignación que tiene una entrada para cada par `User` / `Organization` con un esquema como ```graphql type Organization @entity { @@ -349,7 +349,7 @@ type UserOrganization @entity { } ``` -This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users: +Este enfoque requiere que las consultas desciendan a un nivel adicional para recuperar, por ejemplo, las organizaciones para los usuarios: ```graphql query usersWithOrganizations { @@ -364,11 +364,11 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +Esta forma más elaborada de almacenar las relaciones de muchos a muchos se traducirá en menos datos almacenados para el subgrafo y, por tanto, en un subgrafo que suele ser mucho más rápido de indexar y consultar. -#### Adding comments to the schema +#### Agregar comentarios al esquema -As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below: +Según la especificación GraphQL, se pueden añadir comentarios por encima de los atributos de entidad del esquema utilizando comillas dobles `""`. Esto se ilustra en el siguiente ejemplo: ```graphql type MyFirstEntity @entity { @@ -378,13 +378,13 @@ type MyFirstEntity @entity { } ``` -## Defining Fulltext Search Fields +## Definición de Campos de Búsqueda de Texto Completo -Fulltext search queries filter and rank entities based on a text search input. Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing to the indexed text data. +Las consultas de búsqueda de texto completo filtran y clasifican las entidades basándose en una entrada de búsqueda de texto. Las consultas de texto completo pueden devolver coincidencias de palabras similares procesando el texto de la consulta en stems antes de compararlo con los datos del texto indexado. -A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. +La definición de una consulta de texto completo incluye el nombre de la consulta, el diccionario lingüístico utilizado para procesar los campos de texto, el algoritmo de clasificación utilizado para ordenar los resultados y los campos incluidos en la búsqueda. Cada consulta de texto completo puede abarcar varios campos, pero todos los campos incluidos deben ser de un solo tipo de entidad. -To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. +Para agregar una consulta de texto completo, incluye un tipo `_Schema_` con una directiva de texto completo en el esquema GraphQL. 
```graphql type _Schema_ @@ -407,7 +407,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/developer/graphql-api#queries) for a description of the Fulltext search API and for more example usage. +El ejemplo campo `bandSearch` se puede utilizar en las consultas para filtrar las entidades `Band` con base en los documentos de texto en los campos `name`, `description`, y `bio`. Ve a [GraphQL API - Queries](/developer/graphql-api#queries) para ver una descripción de la API de búsqueda de texto completo y más ejemplos de uso. ```graphql query { @@ -420,49 +420,49 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** Desde `specVersion` `0.0.4` y en adelante, `fullTextSearch` se debe declarar bajo la sección `features` en el manifiesto del subgrafo. -### Languages supported +### Idiomas admitidos -Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary language to language. For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". +La elección de un idioma diferente tendrá un efecto definitivo, aunque a veces sutil, en la API de búsqueda de texto completo. Los campos cubiertos por un campo de consulta de texto completo se examinan en el contexto de la lengua elegida, por lo que los lexemas producidos por las consultas de análisis y búsqueda varían de un idioma a otro. Por ejemplo: al utilizar el diccionario turco compatible, "token" se convierte en "toke", mientras que el diccionario inglés lo convierte en "token". -Supported language dictionaries: +Diccionarios de idiomas admitidos: -| Code | Dictionary | -| ------ | ---------- | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | Portugese | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| Código | Diccionario | +| ------ | ----------- | +| simple | General | +| da | Danés | +| nl | Holandés | +| en | Inglés | +| fi | Finlandés | +| fr | Francés | +| de | Alemán | +| hu | Húngaro | +| it | Italiano | +| no | Noruego | +| pt | Portugués | +| ro | Rumano | +| ru | Ruso | +| es | Español | +| sv | Sueco | +| tr | Turco | -### Ranking Algorithms +### Algoritmos de Clasificación -Supported algorithms for ordering results: +Algoritmos admitidos para ordenar los resultados: -| Algorithm | Description | -| ------------- | ----------------------------------------------------------------------- | -| rank | Use the match quality (0-1) of the fulltext query to order the results. | -| proximityRank | Similar to rank but also includes the proximity of the matches. 
|
+| Algoritmos      | Descripción                                                                                         |
+| --------------- | --------------------------------------------------------------------------------------------------- |
+| rank            | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados.  |
+| proximityRank   | Similar a rank, pero también incluye la proximidad de las coincidencias.                             |

-## Writing Mappings
+## Escribir Mapeos

-The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax.
+Los mapeos transforman los datos de Ethereum de los que se abastecen tus mapeos en entidades definidas en tu esquema. Los mapeos se escriben en un subconjunto de [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) llamado [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) que puede ser compilado a WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript es más estricto que el TypeScript normal, pero proporciona una sintaxis familiar.

-For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
+Para cada handler de eventos que se define en `subgraph.yaml` bajo `mapping.eventHandlers`, crea una función exportada del mismo nombre. Cada handler debe aceptar un único parámetro llamado `event` con un tipo correspondiente al nombre del evento que se está manejando.

-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+En el subgrafo de ejemplo, `src/mapping.ts` contiene handlers para los eventos `NewGravatar` y `UpdatedGravatar`:

```javascript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -489,31 +489,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
}
```

-The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`.
+El primer handler toma un evento `NewGravatar` y crea una nueva entidad `Gravatar` con `new Gravatar(event.params.id.toHex())`, poblando los campos de la entidad usando los parámetros correspondientes del evento. Esta instancia de entidad está representada por la variable `gravatar`, con un valor de id de `event.params.id.toHex()`.

-The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters, before it is saved back to the store using `gravatar.save()`.
+El segundo handler intenta cargar el `Gravatar` existente desde el almacén de Graph Node. Si aún no existe, se crea bajo demanda. A continuación, la entidad se actualiza para que coincida con los nuevos parámetros del evento, antes de volver a guardarla en el almacén mediante `gravatar.save()`.
-### Recommended IDs for Creating New Entities +### ID Recomendados para la Creación de Nuevas Entidades -Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`. +Cada entidad tiene que tener un `id` que sea único entre todas las entidades del mismo tipo. El valor del `id` de una entidad se establece cuando se crea la entidad. A continuación se recomiendan algunos valores de `id` a tener en cuenta a la hora de crear nuevas entidades. NOTA: El valor del `id` debe ser un `string`. - `event.params.id.toHex()` - `event.transaction.from.toHex()` - `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` -We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilies for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`. +Proporcionamos la [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) que contiene utilidades para interactuar con el almacén Graph Node y comodidades para manejar datos y entidades de contratos inteligentes. Puedes utilizar esta biblioteca en tus mapeos importando `@graphprotocol/graph-ts` in `mapping.ts`. -## Code Generation +## Generación de Código -In order to make working smart contracts, events and entities easy and type-safe, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +Para que trabajar con contratos inteligentes, eventos y entidades sea fácil y seguro desde el punto de vista de los tipos, Graph CLI puede generar tipos AssemblyScript a partir del esquema GraphQL del subgrafo y de las ABIs de los contratos incluidas en las fuentes de datos. -This is done with +Esto se hace con ```sh graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +pero en la mayoría de los casos, los subgrafos ya están preconfigurados a través de `package.json` para permitirte simplemente ejecutar uno de los siguientes para lograr lo mismo: ```sh # Yarn @@ -523,7 +523,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with +Esto generará una clase AssemblyScript para cada contrato inteligente en los archivos ABI mencionados en `subgraph.yaml`, permitiéndote vincular estos contratos a direcciones específicas en los mapeos y llamar a métodos de contrato de sólo lectura contra el bloque que se está procesando. También generará una clase para cada evento del contrato para facilitar el acceso a los parámetros del evento, así como el bloque y la transacción que originó el evento. 
Todos estos tipos se escriben en `//.ts`. En el subgrafo de ejemplo, esto sería `generated/Gravity/Gravity.ts`, permitiendo a los mapeos importar estos tipos con ```javascript import { @@ -535,23 +535,23 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +Además, se genera una clase para cada tipo de entidad en el esquema GraphQL del subgrafo. Estas clases proporcionan una carga de entidades segura, acceso de lectura y escritura a los campos de la entidad, así como un método `save()` para escribir entidades en el almacén. Todas las clases de entidades se escriben en `/schema.ts`, lo que permite que los mapeos los importen con ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Nota:** La generación de código debe realizarse de nuevo después de cada cambio en el esquema GraphQL o en las ABIs incluidas en el manifiesto. También debe realizarse al menos una vez antes de construir o desplegar el subgrafo. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +La generación de código no comprueba tu código de mapeo en `src/mapping.ts`. Si quieres comprobarlo antes de intentar desplegar tu subgrafo en the Graph Explorer, puedes ejecutar `yarn build` y corregir cualquier error de sintaxis que el compilador de TypeScript pueda encontrar. -## Data Source Templates +## Plantillas de Fuentes de Datos -A common pattern in Ethereum smart contracts is the use of registry or factory contracts, where one contract creates, manages or references an arbitrary number of other contracts that each have their own state and events. The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +Un patrón común en los contratos inteligentes de Ethereum es el uso de contratos de registro o fábrica, donde un contrato crea, gestiona o hace referencia a un número arbitrario de otros contratos que tienen cada uno su propio estado y eventos. Las direcciones de estos subcontratos pueden o no conocerse de antemano y muchos de estos contratos pueden crearse y/o añadirse con el tiempo. Por eso, en estos casos, es imposible definir una única fuente de datos o un número fijo de fuentes de datos y se necesita un enfoque más dinámico: _data source templates_. -### Data Source for the Main Contract +### Fuente de Datos para el Contrato Principal -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.io) exchange factory contract. Note the `NewExchange(address,address)` event handler. 
This is emitted when a new exchange contract is created on chain by the factory contract. +En primer lugar, define una fuente de datos regular para el contrato principal. El siguiente fragmento muestra un ejemplo simplificado de fuente de datos para el contrato de fábrica de exchange [Uniswap](https://uniswap.io). Nota el handler `NewExchange(address,address)` del evento. Se emite cuando el contrato de fábrica crea un nuevo contrato de exchange en la cadena. ```yaml dataSources: @@ -576,9 +576,9 @@ dataSources: handler: handleNewExchange ``` -### Data Source Templates for Dynamically Created Contracts +### Plantillas de Fuentes de Datos para Contratos Creados Dinámicamente -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a predefined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +A continuación, añade _plantillas de origen de datos_ al manifiesto. Son idénticas a las fuentes de datos normales, salvo que carecen de una dirección de contrato predefinida en `source`. Normalmente, defines un modelo para cada tipo de subcontrato gestionado o referenciado por el contrato principal. ```yaml dataSources: @@ -612,9 +612,9 @@ templates: handler: handleRemoveLiquidity ``` -### Instantiating a Data Source Template +### Instanciación de una Plantilla de Fuente de Datos -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +En el último paso, actualiza la asignación del contrato principal para crear una instancia de fuente de datos dinámica a partir de una de las plantillas. En este ejemplo, cambiarías el mapeo del contrato principal para importar la plantilla `Exchange` y llamaría al método `Exchange.create(address)` en él para empezar a indexar el nuevo contrato de exchange. ```typescript import { Exchange } from '../generated/templates' @@ -626,13 +626,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> **Nota:** Un nuevo origen de datos sólo procesará las llamadas y los eventos del bloque en el que fue creado y todos los bloques siguientes, pero no procesará los datos históricos, es decir, los datos que están contenidos en bloques anteriores. > -> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. +> Si los bloques anteriores contienen datos relevantes para la nueva fuente de datos, lo mejor es indexar esos datos leyendo el estado actual del contrato y creando entidades que representen ese estado en el momento de crear la nueva fuente de datos. -### Data Source Context +### Contexto de la Fuente de Datos -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. 
That information can be passed into the instantiated data source, like so: +Los contextos de fuentes de datos permiten pasar una configuración extra al instanciar una plantilla. En nuestro ejemplo, digamos que los exchanges se asocian a un par de trading concreto, que se incluye en el evento `NewExchange`. Esa información se puede pasar a la fuente de datos instanciada, así: ```typescript import { Exchange } from '../generated/templates' @@ -644,7 +644,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +Dentro de un mapeo de la plantilla `Exchange`, se puede acceder al contexto: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -653,11 +653,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +Hay setters y getters como `setString` and `getString` para todos los tipos de valores. -## Start Blocks +## Bloques de Inicio -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +El `startBlock` es un ajuste opcional que permite definir a partir de qué bloque de la cadena comenzará a indexar la fuente de datos. Establecer el bloque inicial permite a la fuente de datos omitir potencialmente millones de bloques que son irrelevantes. Normalmente, un desarrollador de subgrafos establecerá `startBlock` al bloque en el que se creó el contrato inteligente de la fuente de datos. ```yaml dataSources: @@ -683,23 +683,23 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> **Nota:** El bloque de creación del contrato se puede buscar rápidamente en Etherscan: > -> 1. Search for the contract by entering its address in the search bar. -> 2. Click on the creation transaction hash in the `Contract Creator` section. -> 3. Load the transaction details page where you'll find the start block for that contract. +> 1. Busca el contrato introduciendo su dirección en la barra de búsqueda. +> 2. Haz clic en el hash de la transacción de creación en la sección `Contract Creator`. +> 3. Carga la página de detalles de la transacción, donde encontrarás el bloque inicial de ese contrato. -## Call Handlers +## Handlers de Llamadas -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +Aunque los eventos proporcionan una forma eficaz de recoger los cambios relevantes en el estado de un contrato, muchos contratos evitan generar registros para optimizar los costos de gas. 
En estos casos, un subgrafo puede suscribirse a las llamadas realizadas al contrato de la fuente de datos. Esto se consigue definiendo los handlers de llamadas que hacen referencia a la firma de la función y al handler de mapeo que procesará las llamadas a esta función. Para procesar estas llamadas, el handler de mapeo recibirá un `ethereum.Call` como argumento con las entradas y salidas tipificadas de la llamada. Las llamadas realizadas en cualquier profundidad de la cadena de llamadas de una transacción activarán el mapeo, permitiendo capturar la actividad con el contrato de origen de datos a través de los contratos proxy.

-Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.
+Los handlers de llamadas sólo se activarán en uno de estos dos casos: cuando la función especificada sea llamada por una cuenta distinta del propio contrato o cuando esté marcada como externa en Solidity y sea llamada como parte de otra función en el mismo contrato.

-> **Note:** Call handlers are not supported on Rinkeby, Goerli or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it.
+> **Nota:** Los handlers de llamadas no son compatibles con Rinkeby, Goerli o Ganache. Los handlers de llamadas dependen actualmente de la API de rastreo de Parity y estas redes no la admiten.

-### Defining a Call Handler
+### Definición de un Handler de Llamadas

-To define a call handler in your manifest simply add a `callHandlers` array under the data source you would like to subscribe to.
+Para definir un handler de llamadas en tu manifiesto, simplemente añade una array `callHandlers` bajo la fuente de datos a la que deseas suscribirte.

```yaml
dataSources:
@@ -724,11 +724,11 @@ dataSources:
      handler: handleCreateGravatar
```

-The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract.
+La propiedad `function` es la firma normalizada de la función por la que se filtran las llamadas. La propiedad `handler` es el nombre de la función de tu mapeo que quieres ejecutar cuando se llame a la función de destino en el contrato de origen de datos.

-### Mapping Function
+### Función de Mapeo

-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Cada handler de llamadas toma un solo parámetro que tiene un tipo correspondiente al nombre de la función llamada. En el subgrafo de ejemplo anterior, el mapeo contiene un handler para cuando la función `createGravatar` es llamada y recibe un parámetro `CreateGravatarCall` como argumento:

```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -743,22 +743,22 @@ export function handleCreateGravatar(call: CreateGravatarCall): void {
}
```

-The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.
+La función `handleCreateGravatar` toma una nueva `CreateGravatarCall` que es una subclase de `ethereum.Call`, proporcionada por `@graphprotocol/graph-ts`, que incluye las entradas y salidas tipificadas de la llamada. El tipo `CreateGravatarCall` se genera automáticamente cuando ejecutas `graph codegen`.

-## Block Handlers
+## Handlers de Bloques

-In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a predefined filter.
+Además de suscribirse a eventos del contrato o llamadas a funciones, un subgrafo puede querer actualizar sus datos a medida que se añaden nuevos bloques a la cadena. Para ello, un subgrafo puede ejecutar una función después de cada bloque o después de los bloques que coincidan con un filtro predefinido.

-### Supported Filters
+### Filtros Admitidos

```yaml
filter:
  kind: call
```

-_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
+_El handler definido será llamado una vez por cada bloque que contenga una llamada al contrato (fuente de datos) bajo el cual está definido el handler._

-The absense of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
+La ausencia de un filtro para un handler de bloque asegurará que el handler sea llamado en cada bloque. Una fuente de datos sólo puede contener un handler de bloque para cada tipo de filtro.

```yaml
dataSources:
@@ -785,9 +785,9 @@ dataSources:
        kind: call
```

-### Mapping Function
+### Función de Mapeo

-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+La función de mapeo recibirá un `ethereum.Block` como único argumento. Al igual que las funciones de mapeo de eventos, esta función puede acceder a las entidades del subgrafo existentes en el almacén, llamar a los contratos inteligentes y crear o actualizar entidades.

```typescript
import { ethereum } from '@graphprotocol/graph-ts'
@@ -799,9 +799,9 @@ export function handleBlock(block: ethereum.Block): void {
}
```

-## Anonymous Events
+## Eventos Anónimos

-If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example:
+Si necesitas procesar eventos anónimos en Solidity, puedes hacerlo proporcionando el tema 0 del evento, como en el ejemplo:

```yaml
eventHandlers:
@@ -810,20 +810,20 @@ eventHandlers:
    handler: handleGive
```

-An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature.
+Un evento sólo se activará cuando la firma y el tema 0 coincidan. Por defecto, `topic0` es igual al hash de la firma del evento.
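A modo de referencia (boceto ilustrativo, no incluido en el documento original): el handler `handleGive` declarado arriba se escribe igual que cualquier otro handler de eventos. La clase `LogNote` y la ruta de importación `../generated/Contract/Contract` son suposiciones sobre lo que `graph codegen` generaría a partir de la ABI correspondiente.

```typescript
import { log } from '@graphprotocol/graph-ts'
// Suposición: `graph codegen` generó la clase `LogNote` para el evento anterior;
// la ruta '../generated/Contract/Contract' es hipotética.
import { LogNote } from '../generated/Contract/Contract'

export function handleGive(event: LogNote): void {
  // El evento anónimo llega con sus parámetros tipados en `event.params`,
  // y el contexto del bloque y de la transacción está disponible como siempre.
  log.info('Evento LogNote procesado en el bloque {}', [event.block.number.toString()])
}
```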
-## Experimental features
+## Características experimentales

-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+A partir de `specVersion` `0.0.4`, las características del subgrafo deben declararse explícitamente en la sección `features` del nivel superior del archivo de manifiesto, utilizando su nombre en `camelCase`, como se indica en la tabla siguiente:

-| Feature | Name |
+| Característica | Nombre |
| --------------------------------------------------------- | ------------------------- |
| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
| [IPFS on Ethereum Contracts](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` |

-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+Por ejemplo, si un subgrafo utiliza las características **Full-Text Search** y **Non-fatal Errors**, el campo `features` del manifiesto debería ser:

```yaml
specVersion: 0.0.4
@@ -834,27 +834,27 @@ features:
dataSources: ...
```

-Note that using a feature without declaring it will incur in a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+Ten en cuenta que el uso de una característica sin declararla incurrirá en un **error de validación** durante el despliegue del subgrafo, pero no se producirá ningún error si se declara una característica pero no se utiliza.

-### IPFS on Ethereum Contracts
+### IPFS en Contratos de Ethereum

-A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on chain, and reference the IPFS hash in Ethereum contracts.
+Un caso de uso común para combinar IPFS con Ethereum es almacenar datos en IPFS que serían demasiado costosos de mantener en la cadena, y hacer referencia al hash de IPFS en los contratos de Ethereum.

-Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, however, it is required that these files are pinned on the IPFS node that the Graph Node indexing the subgraph connects to. In the case of the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/).
+Dados estos hashes de IPFS, los subgrafos pueden leer los archivos correspondientes desde IPFS utilizando `ipfs.cat` y `ipfs.map`. Sin embargo, para hacer esto de forma fiable, es necesario que estos archivos estén anclados en el nodo IPFS al que se conecta el Graph Node que indexa el subgrafo. En el caso del [hosted service](https://thegraph.com/hosted-service), es [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs/).

-> **Note:** The Graph Network does not yet support `ipfs.cat` and `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Nota:** The Graph Network todavía no admite `ipfs.cat` y `ipfs.map`, y los desarrolladores no deben desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio.
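Como ilustración (boceto mínimo que no forma parte del documento original): en un entorno donde `ipfs.cat` está disponible, por ejemplo el hosted service, la lectura de un archivo anclado podría verse así dentro de un mapeo; el nombre de la función y el uso del hash son hipotéticos.

```typescript
import { ipfs, log } from '@graphprotocol/graph-ts'

// Función hipotética que se llamaría desde un handler de eventos,
// pasándole el hash de IPFS leído del contrato.
export function leerArchivoDeIpfs(hash: string): void {
  // `ipfs.cat` devuelve los bytes del archivo, o null si el archivo
  // no está disponible en el nodo IPFS conectado al Graph Node.
  let data = ipfs.cat(hash)
  if (data != null) {
    log.info('Se leyeron {} bytes desde IPFS para {}', [data.length.toString(), hash])
  } else {
    log.warning('Archivo IPFS no disponible: {}', [hash])
  }
}
```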
-In order to make this easy for subgraph developers, The Graph team wrote a tool for transfering files from one IPFS node to another, called [ipfs-sync](https://github.com/graphprotocol/ipfs-sync).
+Para facilitar esto a los desarrolladores de subgrafos, el equipo de The Graph escribió una herramienta para transferir archivos de un nodo IPFS a otro, llamada [ipfs-sync](https://github.com/graphprotocol/ipfs-sync).

-> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest.
+> **[Gestión de características](#experimental-features):** `ipfsOnEthereumContracts` debe declararse bajo `features` en el manifiesto del subgrafo.

-### Non-fatal errors
+### Errores no fatales

-Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results will possibly be inconsistent due to the bug that caused the error. Note that some errors are still always fatal, to be non-fatal the error must be known to be deterministic.
+Los errores de indexación en subgrafos ya sincronizados harán que, por defecto, el subgrafo falle y deje de sincronizarse. Los subgrafos pueden ser configurados alternativamente para continuar la sincronización en presencia de errores, ignorando los cambios realizados por el handler que provocó el error. Esto da a los autores de subgrafos tiempo para corregir sus subgrafos mientras las consultas siguen siendo servidas contra el último bloque, aunque los resultados serán posiblemente inconsistentes debido al fallo que causó el error. Ten en cuenta que algunos errores siguen siendo siempre fatales; para que un error no sea fatal, debe saberse que es determinista.

-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Nota:** The Graph Network todavía no admite errores no fatales, y los desarrolladores no deben desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio.

-Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:
+La activación de los errores no fatales requiere el establecimiento de la siguiente bandera de características en el manifiesto del subgrafo:

```yaml
specVersion: 0.0.4
@@ -864,7 +864,7 @@ features:
  ...
```

-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+La consulta también debe optar por consultar datos con posibles inconsistencias a través del argumento `subgraphError`.
También se recomienda consultar `_meta` para comprobar si el subgrafo ha saltado los errores, como en el ejemplo: ```graphql foos(first: 100, subgraphError: allow) { @@ -876,7 +876,7 @@ _meta { } ``` -If the subgraph encounters an error that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +Si el subgrafo encuentra un error esa consulta devolverá tanto los datos como un error de graphql con el mensaje `"indexing_error"`, como en este ejemplo de respuesta: ```graphql "data": { @@ -896,13 +896,13 @@ If the subgraph encounters an error that query will return both the data and a g ] ``` -### Grafting onto Existing Subgraphs +### Grafting en Subgrafos Existentes -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed. +Cuando un subgrafo se despliega por primera vez, comienza a indexar eventos en el bloque génesis de la cadena correspondiente (o en el `startBlock` definido con cada fuente de datos) En algunas circunstancias, es beneficioso reutilizar los datos de un subgrafo existente y comenzar a indexar en un bloque mucho más tarde. Este modo de indexación se denomina _Grafting_. El grafting es, por ejemplo, útil durante el desarrollo para superar rápidamente errores simples en los mapeos, o para hacer funcionar temporalmente un subgrafo existente después de que haya fallado. -> **Note:** Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Nota:** El grafting requiere que el indexador haya indexado el subgrafo base. No se recomienda en The Graph Network en este momento, y los desarrolladores no deberían desplegar subgrafos que utilicen esa funcionalidad en la red a través de Studio. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the toplevel: +Un subgrafo se injerta en un subgrafo base cuando el manifiesto del subgrafo en `subgraph.yaml` contiene un bloque `graft` en el nivel superior: ```yaml description: ... @@ -911,18 +911,18 @@ graft: block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +Cuando se despliega un subgrafo cuyo manifiesto contiene un bloque `graft`, Graph Node copiará los datos del subgrafo `base` hasta e incluyendo el `block` dado y luego continuará indexando el nuevo subgrafo a partir de ese bloque. El subgrafo base debe existir en el target de Graph Node de destino y debe haber indexado hasta al menos el bloque dado. 
Debido a esta restricción, el grafting sólo debería utilizarse durante el desarrollo o durante una emergencia para acelerar la producción de un subgrafo equivalente no grafted. -Because grafting copies rather than indexes base data it is much quicker in getting the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Dado que el grafting copia en lugar de indexar los datos de base, es mucho más rápido llevar el subgrafo al bloque deseado que indexar desde cero, aunque la copia inicial de los datos puede tardar varias horas en el caso de subgrafos muy grandes. Mientras se inicializa el subgrafo grafteado, the Graph Node registrará información sobre los tipos de entidad que ya han sido copiados. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right but may deviate from the base subgraph's schema in the following ways: +El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede desviarse del esquema del subgrafo base de las siguientes maneras: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Agrega o elimina tipos de entidades +- Elimina los atributos de los tipos de entidad +- Agrega atributos anulables a los tipos de entidad +- Convierte los atributos no anulables en atributos anulables +- Añade valores a los enums +- Agrega o elimina interfaces +- Cambia para qué tipos de entidades se implementa una interfaz -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[La gestión de características](#experimental-features):** `grafting` se declara en `features` en el manifiesto del subgrafo. From fcad7dc8856c78b695277e88e83bc404505bea7f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:52 -0500 Subject: [PATCH 143/241] New translations assemblyscript-api.mdx (Japanese) --- pages/ja/developer/assemblyscript-api.mdx | 300 +++++++++++----------- 1 file changed, 150 insertions(+), 150 deletions(-) diff --git a/pages/ja/developer/assemblyscript-api.mdx b/pages/ja/developer/assemblyscript-api.mdx index 2afa431fe8c5..4826602d3457 100644 --- a/pages/ja/developer/assemblyscript-api.mdx +++ b/pages/ja/developer/assemblyscript-api.mdx @@ -2,75 +2,75 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) +> Note: `graph-cli`/`graph-ts` version `0.22.0`より前にサブグラフを作成した場合、古いバージョンの AssemblyScript を使用しているので、[`Migration Guide`](/developer/assemblyscript-migration-guide)を参照することをお勧めします。 -This page documents what built-in APIs can be used when writing subgraph mappings. 
Two kinds of APIs are available out of the box: +このページでは、サブグラフのマッピングを記述する際に、どのような組み込み API を使用できるかを説明します。 すぐに使える API は 2 種類あります: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- [Graph TypeScript ライブラリ](https://github.com/graphprotocol/graph-ts) (`graph-ts`)と +- `graph codegen`によってサブグラフファイルから生成されたコードです。 -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +また、[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)との互換性があれば、他のライブラリを依存関係に追加することも可能です。 マッピングはこの言語で書かれているので、言語や標準ライブラリの機能については、 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki)が参考になります。 -## Installation +## インストール -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +[`graph init`](/developer/create-subgraph-hosted)で作成されたサブグラフには、あらかじめ設定された依存関係があります。 これらの依存関係をインストールするために必要なのは、以下のコマンドのいずれかを実行することです: ```sh yarn install # Yarn -npm install # NPM +npm install # NPM ``` -If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: +サブグラフが最初から作成されている場合は、次の 2 つのコマンドのいずれかを実行すると、Graph TypeScript ライブラリが依存関係としてインストールされます: ```sh -yarn add --dev @graphprotocol/graph-ts # Yarn -npm install --save-dev @graphprotocol/graph-ts # NPM +yarn add --dev @graphprotocol/graph-ts # Yarn +npm install -save-dev @graphprotocol/graph-ts # NPM ``` -## API Reference +## API リファレンス -The `@graphprotocol/graph-ts` library provides the following APIs: +`@graphprotocol/graph-ts`ライブラリは、以下の API を提供しています: -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and the Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. +- Ethereum スマートコントラクト、イベント、ブロック、トランザクション、Ethereum の値を扱うための`ethereum`API +- エンティティをグラフノードのストアからロードしたり、ストアに保存したりする`store`API +- Graph Node の出力や Graph Explorer にメッセージを記録するための`log`API です +- IPFS からファイルをロードする`ipfs`API +- JSON データを解析するための`json`API +- 暗号機能を使用するための`crypto`API +- Ethereum、JSON、GraphQL、AssemblyScript など、異なるタイプのシステム間で変換するための低レベルプリミティブ -### Versions +### バージョン -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +サブグラフマニフェストの`apiVersion` は、指定されたサブグラフに対してグラフノードが実行するマッピング API のバージョンを指定します。 現在のマッピング API のバージョンは 0.0.6 です。 -| Version | Release notes | -|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| バージョン | リリースノート | +|:-----:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | Ethereum Transaction オブジェクトに`nonce`フィールドを追加 イーサリアムブロックオブジェクトに
Added `baseFeePerGas`を追加 | +| 0.0.5 | AssemblyScript がバージョン 0.19.10 にアップグレード(変更点がありますので[`Migration Guide`](/developer/assemblyscript-migration-guide))をご覧ください)。
`ethereum.transaction.gasUsed`の名前が`ethereum.transaction.gasLimit`に変更 | +| 0.0.4 | Ethereum SmartContractCall オブジェクトに`functionSignature`フィールドを追加 | +| 0.0.3 | Ethereum Call オブジェクトに`from`フィールドを追加
`etherem.call.address`の名前を `ethereum.call.to`に変更 | +| 0.0.2 | Ethereum Transaction オブジェクトに `input`フィールドを追加 | -### Built-in Types +### 組み込み型 -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +AssemblyScript に組み込まれている基本型のドキュメントは[AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types)にあります。 -The following additional types are provided by `@graphprotocol/graph-ts`. +以下の追加型は`@graphprotocol/graph-ts`で提供されています。 -#### ByteArray +#### バイト配列 ```typescript -import { ByteArray } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から{ ByteArray } をインポートします。 ``` -`ByteArray` represents an array of `u8`. +`ByteArray`は、`u8`の配列を表します。 -_Construction_ +_構造_ - `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. - `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. -_Type conversions_ +_型変換_ - `toHexString(): string` - Converts to a hex string prefixed with `0x`. - `toString(): string` - Interprets the bytes as a UTF-8 string. @@ -78,66 +78,66 @@ _Type conversions_ - `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. - `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. -_Operators_ +_オペレーター_ - `equals(y: ByteArray): bool` – can be written as `x == y`. #### BigDecimal ```typescript -import { BigDecimal } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { BigDecimal } をインポートします。 ``` -`BigDecimal` is used to represent arbitrary precision decimals. +`BigDecimal`は、任意の精度の小数を表現するために使用されます。 -_Construction_ +_構造_ - `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. - `static fromString(s: string): BigDecimal` – parses from a decimal string. -_Type conversions_ +_型変換_ - `toString(): string` – prints to a decimal string. -_Math_ - -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. -- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. +_数学_ + +- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y` +- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y` +- `times(y: BigDecimal): BigDecimal` – can be written as `x * y` +- `div(y: BigDecimal): BigDecimal` – can be written as `x / y` +- `equals(y: BigDecimal): bool` – can be written as `x == y` +- `notEqual(y: BigDecimal): bool` – can be written as `x != y` +- `lt(y: BigDecimal): bool` – can be written as `x < y` +- `le(y: BigDecimal): bool` – can be written as `x <= y` +- `gt(y: BigDecimal): bool` – can be written as `x > y` +- `ge(y: BigDecimal): bool` – can be written as `x >= y` - `neg(): BigDecimal` - can be written as `-x`. #### BigInt ```typescript -import { BigInt } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { BigInt } をインポートします。 ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. 
Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt`は大きな整数を表すのに使われます。 これには、Ethereum の`uint32`~`uint256` 、`int64` ~`int256`の値が含まれます。 `uint32`、`int32`、`uint24`、`int8`以下のものはすべて`i32`で表されます。 -The `BigInt` class has the following API: +`BigInt`クラスの API は以下の通りです。 -_Construction_ +_構造_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32` +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string +- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. If your input is big-endian, call `.reverse()` first. - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. - _Type conversions_ + _型変換_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. - `x.toString(): string` – turns `BigInt` into a decimal number string. - `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. - `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. -_Math_ +_数学_ - `x.plus(y: BigInt): BigInt` – can be written as `x + y`. - `x.minus(y: BigInt): BigInt` – can be written as `x - y`. @@ -164,12 +164,12 @@ _Math_ #### TypedMap ```typescript -import { TypedMap } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { TypedMap } をインポートします。 ``` -`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap` はキーと値のペアを格納するために使用することができます。 [この例](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51)を参照してください。 -The `TypedMap` class has the following API: +TypedMap クラスは以下のような API を持っています。 - `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` - `map.set(key: K, value: V): void` – sets the value of `key` to `value` @@ -180,12 +180,12 @@ The `TypedMap` class has the following API: #### Bytes ```typescript -import { Bytes } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { Bytes } をインポートします。 ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. 
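To make the constructors and conversion helpers above concrete, here is a small, self-contained sketch; the values are arbitrary and only methods listed in this section are used:

```typescript
import { ByteArray, BigInt } from '@graphprotocol/graph-ts'

// Round-trip a ByteArray through the conversion helpers listed above
let bytes = ByteArray.fromHexString('0x2a')
let asHex = bytes.toHexString() // '0x2a'
let asI32 = bytes.toI32() // 42, read as a little-endian integer

// BigInt math methods can also be written with their operator forms
let a = BigInt.fromI32(21)
let b = BigInt.fromI32(2)
let product = a.times(b) // equivalent to a * b
```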
+`Bytes` は、任意の長さの bytes 配列を表すために使用されます。 これには、Ethereum の `bytes`、`bytes32` などの型の値が含まれます。 -The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: +`Bytes`クラスは AssemblyScript の[Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64)を継承しており、`Uint8Array` のすべての機能に加えて、以下の新しいメソッドをサポートしています。 - `b.toHex()` – returns a hexadecimal string representing the bytes in the array - `b.toString()` – converts the bytes in the array to a string of unicode characters @@ -194,28 +194,28 @@ The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/Assem #### Address ```typescript -import { Address } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { Address } をインポートします。 ``` -`Address` extends `Bytes` to represent Ethereum `address` values. +`Address`は Ethereum の`address`値を表現するために`Bytes`を拡張しています。 -It adds the following method on top of the `Bytes` API: +`Bytes`の API の上に以下のメソッドを追加しています。 - `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string ### Store API ```typescript -import { store } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { store } をインポートします。 ``` -The `store` API allows to load, save and remove entities from and to the Graph Node store. +`store` API は、グラフノードのストアにエンティティを読み込んだり、保存したり、削除したりすることができます。 -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +ストアに書き込まれたエンティティは、サブグラフの GraphQL スキーマで定義された`@entity`タイプに一対一でマッピングされます。 これらのエンティティの扱いを便利にするために、[Graph CLI](https://github.com/graphprotocol/graph-cli)で提供される `graph codegen` コマンドは、組み込みの`Entity`型のサブクラスであるエンティティ・クラスを生成します。 -#### Creating entities +#### エンティティの作成 -The following is a common pattern for creating entities from Ethereum events. +Ethereum のイベントからエンティティを作成する際の一般的なパターンを以下に示します。 ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. +チェーンの処理中に`Transfer` イベントが発生すると、生成された`Transfer`タイプ(ここではエンティティタイプとの名前の衝突を避けるために`TransferEvent`とエイリアスされています)を使用して、`handleTransfer`イベントハンドラに渡されます。 このタイプでは、イベントの親トランザクションやそのパラメータなどのデータにアクセスすることができます。 -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. 
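As a hedged variation on the handler above, the transaction hash can be combined with the log index so that several events emitted in the same transaction still produce distinct IDs; the generated import paths below are placeholders and depend on your own subgraph:

```typescript
// Placeholder import paths – use the ones generated for your own subgraph
import { Transfer as TransferEvent } from '../generated/ERC20/ERC20'
import { Transfer } from '../generated/schema'

export function handleTransfer(event: TransferEvent): void {
  // tx hash + log index stays unique even if one transaction emits several Transfer events
  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
  let transfer = new Transfer(id)
  transfer.from = event.params.from
  transfer.to = event.params.to
  transfer.amount = event.params.amount
  transfer.save()
}
```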
+各エンティティは、他のエンティティとの衝突を避けるために、ユニークな ID を持たなければなりません。 イベントのパラメータには、使用可能な一意の識別子が含まれているのが一般的です。 注:トランザクションのハッシュを ID として使用することは、同じトランザクション内の他のイベントがこのハッシュを ID としてエンティティを作成しないことを前提としています。 -#### Loading entities from the store +#### ストアからのエンティティの読み込み -If an entity already exists, it can be loaded from the store with the following: +エンティティがすでに存在する場合、以下の方法でストアからロードすることができます。 ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +エンティティはまだストアに存在していない可能性があるため、`load`メソッドは`Transfer | null`型の値を返します。 そのため、値を使用する前に、`null`のケースをチェックする必要があるかもしれません。 -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> **Note:** エンティティのロードは、マッピングでの変更がエンティティの以前のデータに依存する場合にのみ必要です。 既存のエンティティを更新する 2 つの方法については、次のセクションを参照してください。 -#### Updating existing entities +#### 既存のエンティティの更新 -There are two ways to update an existing entity: +既存のエンティティを更新するには 2 つの方法があります。 -1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +1. `Transfer.load(id)`などでエンティティをロードし、エンティティにプロパティを設定した後、`.save()`でストアに戻す。 +2. 単純に`new Transfer(id)`でエンティティを作成し、エンティティにプロパティを設定し、ストアに `.save()`します。 エンティティがすでに存在する場合は、変更内容がマージされます。 -Changing properties is straight forward in most cases, thanks to the generated property setters: +プロパティの変更は、生成されたプロパティセッターのおかげで、ほとんどの場合、簡単です。 ```typescript let transfer = new Transfer(id) @@ -279,16 +279,16 @@ transfer.to = ... transfer.amount = ... ``` -It is also possible to unset properties with one of the following two instructions: +また、次の 2 つの命令のいずれかで、プロパティの設定を解除することも可能です。 ```typescript transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. +これは、オプションのプロパティ、つまり GraphQL で`!`を付けずに宣言されているプロパティでのみ機能します。 例としては、`owner: Bytes`や`amount: BigInt`です。 -Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. +エンティティから配列を取得すると、その配列のコピーが作成されるため、配列のプロパティの更新には少し手間がかかります。 つまり、配列を変更した後は、明示的に配列のプロパティを設定し直す必要があります。 次の例では、`entity` が `numbers: [BigInt!]!` を持っていると仮定します。 ```typescript // This won't work @@ -302,9 +302,9 @@ entity.numbers = numbers entity.save() ``` -#### Removing entities from the store +#### ストアからのエンティティの削除 -There is currently no way to remove an entity via the generated types. 
Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +現在、生成された型を使ってエンティティを削除する方法はありません。 代わりに、エンティティを削除するには、エンティティタイプの名前とエンティティ ID を`store.remove`に渡す必要があります。 ```typescript import { store } from '@graphprotocol/graph-ts' @@ -315,15 +315,15 @@ store.remove('Transfer', id) ### Ethereum API -The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. +Ethereum API は、スマートコントラクト、パブリックステート変数、コントラクト関数、イベント、トランザクション、ブロック、および Ethereum データのエンコード/デコードへのアクセスを提供します。 -#### Support for Ethereum Types +#### Ethereum タイプのサポート -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +エンティティと同様に、`graph codegen`は、サブグラフで使用されるすべてのスマートコントラクトとイベントのためのクラスを生成します。 このためには、コントラクト ABI がサブグラフマニフェストのデータソースの一部である必要があります。 通常、ABI ファイルは`abis/`フォルダに格納されています。 -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +生成されたクラスでは、Ethereum 型と [組み込み型](#built-in-types)の間の変換が背後で行われるため、サブグラフの作成者はそれらを気にする必要がありません。 -The following example illustrates this. Given a subgraph schema like +以下の例で説明します。 以下のようなサブグラフのスキーマが与えられます。 ```graphql type Transfer @entity { @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Events and Block/Transaction Data +#### イベントとブロック/トランザクションデータ -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +前述の例の`Transfer`イベントのように、イベントハンドラに渡された Ethereum イベントは、イベントパラメータへのアクセスだけでなく、その親となるトランザクションや、それらが属するブロックへのアクセスも提供します。 `event` インスタンスからは、以下のデータを取得することができます(これらのクラスは、 `graph-ts`の`ethereum`モジュールの一部です)。 ```typescript class Event { @@ -390,11 +390,11 @@ class Transaction { } ``` -#### Access to Smart Contract State +#### スマートコントラクトの状態へのアクセス -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +`graph codegen`が生成するコードには、サブグラフで使用されるスマートコントラクトのクラスも含まれています。 これらを使って、パブリックな状態変数にアクセスしたり、現在のブロックにあるコントラクトの関数を呼び出したりすることができます。 -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +よくあるパターンは、イベントが発生したコントラクトにアクセスすることです。 これは以下のコードで実現できます。 ```typescript // Import the generated contract class @@ -411,13 +411,13 @@ export function handleTransfer(event: Transfer) { } ``` -As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. +Ethereum の `ERC20Contract`に`symbol`というパブリックな読み取り専用の関数があれば、`.symbol()`で呼び出すことができます。 パブリックな状態変数については、同じ名前のメソッドが自動的に作成されます。 -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
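As a sketch only, suppose the manifest also declares a data source for a made-up `TokenRegistry` contract; the class generated for it could then be bound at any known address. The import path, contract name, address and `name()` method below are placeholders, not part of the original docs:

```typescript
import { Address } from '@graphprotocol/graph-ts'
// Hypothetical generated class – the real path and name come from your own manifest and ABI
import { TokenRegistry } from '../generated/TokenRegistry/TokenRegistry'

let registry = TokenRegistry.bind(Address.fromString('0x0000000000000000000000000000000000000000'))
// Any public read-only function or state variable getter exposed by the ABI can be called on the binding
let name = registry.name()
```

If such a call can revert, the `try_` form shown in the next section applies in the same way.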
+サブグラフの一部である他のコントラクトは、生成されたコードからインポートすることができ、有効なアドレスにバインドすることができます。 -#### Handling Reverted Calls +#### リバートされた呼び出しの処理 -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +コントラクトの読み取り専用メソッドが復帰する可能性がある場合は、`try_`を前置して生成されたコントラクトメソッドを呼び出すことで対処しなければなりません。 例えば、Gravity コントラクトでは`gravatarToOwner`メソッドを公開しています。 このコードでは、そのメソッドの復帰を処理することができます。 ```typescript let gravity = Gravity.bind(event.address) @@ -429,11 +429,11 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +ただし、Geth や Infura のクライアントに接続された Graph ノードでは、すべてのリバートを検出できない場合があるので、これに依存する場合は Parity のクライアントに接続された Graph ノードを使用することをお勧めします。 -#### Encoding/Decoding ABI +#### 符号化/復号化 ABI -Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. +`ethereum`モジュールの`encode`/ `decode`関数を使用して、Ethereum の ABI エンコーディングフォーマットに従ってデータをエンコード/デコードすることができます。 ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -450,7 +450,7 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! let decoded = ethereum.decode('(address,uint256)', encoded) ``` -For more information: +その他の情報: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) @@ -459,12 +459,12 @@ For more information: ### Logging API ```typescript -import { log } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から{ log } をインポートします。 ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +`log` API は、サブグラフがグラフノードの標準出力やグラフエクスプローラに情報を記録するためのものです。 メッセージは、異なるログレベルを使って記録することができます。 基本的なフォーマット文字列の構文が提供されており、引数からログメッセージを構成することができます。 -The `log` API includes the following functions: +`log` API には以下の機能があります: - `log.debug(fmt: string, args: Array): void` - logs a debug message. - `log.info(fmt: string, args: Array): void` - logs an informational message. @@ -472,17 +472,17 @@ The `log` API includes the following functions: - `log.error(fmt: string, args: Array): void` - logs an error message. - `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. -The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. 
+`log` API は、フォーマット文字列と文字列値の配列を受け取ります。 そして、プレースホルダーを配列の文字列値で置き換えます。 最初の`{}`プレースホルダーは配列の最初の値に置き換えられ、2 番目の`{}`プレースホルダーは 2 番目の値に置き換えられ、以下のようになります。 ```typescript -log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) +log.info('表示されるメッセージ。{}, {}, {}', [value.toString(), anotherValue.toString(), 'すでに文字列']) ``` -#### Logging one or more values +#### 1 つまたは複数の値を記録する -##### Logging a single value +##### 1 つの値を記録する -In the example below, the string value "A" is passed into an array to become`['A']` before being logged: +以下の例では、文字列値 "A" を配列に渡して`['A']` にしてからログに記録しています。 ```typescript let myValue = 'A' @@ -493,9 +493,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Logging a single entry from an existing array +##### 既存の配列から 1 つのエントリをロギングする -In the example below, only the first value of the argument array is logged, despite the array containing three values. +以下の例では、配列に 3 つの値が含まれているにもかかわらず、引数の配列の最初の値だけがログに記録されます。 ```typescript let myArray = ['A', 'B', 'C'] @@ -506,9 +506,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -#### Logging multiple entries from an existing array +#### 既存の配列から複数のエントリを記録する -Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. +引数配列の各エントリは、ログメッセージ文字列に独自のプレースホルダー`{}`を必要とします。 以下の例では、ログメッセージに 3 つのプレースホルダー`{}`が含まれています。 このため、`myArray`の 3 つの値すべてがログに記録されます。 ```typescript let myArray = ['A', 'B', 'C'] @@ -519,9 +519,9 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Logging a specific entry from an existing array +##### 既存の配列から特定のエントリをロギングする -To display a specific value in the array, the indexed value must be provided. +配列内の特定の値を表示するには、インデックス化された値を指定する必要があります。 ```typescript export function handleSomeEvent(event: SomeEvent): void { @@ -530,12 +530,12 @@ export function handleSomeEvent(event: SomeEvent): void { } ``` -##### Logging event information +##### イベント情報の記録 -The example below logs the block number, block hash and transaction hash from an event: +以下の例では、イベントからブロック番号、ブロックハッシュ、トランザクションハッシュをログに記録しています。 ```typescript -import { log } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { log } をインポートします。 export function handleSomeEvent(event: SomeEvent): void { log.debug('Block number: {}, block hash: {}, transaction hash: {}', [ @@ -549,12 +549,12 @@ export function handleSomeEvent(event: SomeEvent): void { ### IPFS API ```typescript -import { ipfs } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { ipfs } をインポートします。 ``` -Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +スマートコントラクトは時折、チェーン上の IPFS ファイルをアンカリングします。 これにより、マッピングはコントラクトから IPFS ハッシュを取得し、IPFS から対応するファイルを読み取ることができます。 ファイルのデータは`Bytes`として返されますが、通常は、このページで後述する `json` API などを使ってさらに処理する必要があります。 -Given an IPFS hash or path, reading a file from IPFS is done as follows: +IPFS のハッシュやパスが与えられた場合、IPFS からのファイルの読み込みは以下のように行われます。 ```typescript // Put this inside an event handler in the mapping @@ -567,9 +567,9 @@ let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. 
If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. To ensure that files can be retrieved, they have to be pinned to the IPFS node that Graph Node connects to. On the [hosted service](https://thegraph.com/hosted-service), this is [https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs). See the [IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) section for more information. +**注意:** `ipfs.cat` は現時点では決定論的ではありません。 このため、結果に`null`が含まれていないかどうかを常にチェックする必要があります。 リクエストがタイムアウトする前に、Ipfs ネットワーク上でファイルを取得できない場合は、`null`が返されます。 ファイルを確実に取得するためには、グラフノードが接続する IPFS ノードにファイルを固定する必要があります。 [hosted service](https://thegraph.com/hosted-service)では、[https://api.thegraph.com/ipfs/](https://api.thegraph.com/ipfs)です。 詳細は、[IPFS pinning](/developer/create-subgraph-hosted#ipfs-pinning) のセクションを参照してください。 -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +また、`ipfs.map`.を使って、大きなファイルをストリーミングで処理することも可能です。 この関数は、IPFS ファイルのハッシュまたはパス、コールバックの名前、そして動作を変更するためのフラグを受け取ります。 ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' @@ -599,34 +599,34 @@ ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +現在サポートされている唯一のフラグは`json`で、これを`ipfs.map`に渡さなければなりません。 `json`フラグを使用すると、IPFS ファイルは一連の JSON 値で構成され、1 行に 1 つの値が必要です。 `ipfs.map`への呼び出しは、ファイルの各行を読み込み、`JSONValue`にデシリアライズし、それぞれのコールバックを呼び出します。 コールバックは、エンティティ・オペレーションを使って、`JSONValue`からデータを保存することができます。 エンティティの変更は、`ipfs.map`を呼び出したハンドラが正常に終了したときにのみ保存されます。それまでの間は、メモリ上に保持されるため、`ipfs.map`が処理できるファイルのサイズは制限されます。 -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +成功すると,`ipfs.map`は `void`を返します。 コールバックの呼び出しでエラーが発生した場合、`ipfs.map`を呼び出したハンドラは中止され、サブグラフは失敗とマークされます。 ### Crypto API ```typescript -import { crypto } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から { crypto } をインポートします。 ``` -The `crypto` API makes a cryptographic functions available for use in mappings. 
Right now, there is only one: +`crypto` API は、マッピングで使用できる暗号化関数を提供します。 今のところ、1 つしかありません。 - `crypto.keccak256(input: ByteArray): ByteArray` ### JSON API ```typescript -import { json, JSONValueKind } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'から{ json, JSONValueKind } をインポートします。 ``` -JSON data can be parsed using the `json` API: +JSON データは、`json` API を使って解析することができます。 -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +`JSONValue` クラスは、任意の JSON ドキュメントから値を引き出す方法を提供します。 JSON の値には、ブーリアン、数値、配列などがあるため、`JSONValue`には、値の種類をチェックするための`kind`プロパティが付属しています。 ```typescript let value = json.fromBytes(...) @@ -635,11 +635,11 @@ if (value.kind == JSONValueKind.BOOL) { } ``` -In addition, there is a method to check if the value is `null`: +さらに、値が`null`かどうかをチェックするメソッドもあります: - `value.isNull(): boolean` -When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: +値の型が確定している場合は,以下のいずれかの方法で[組み込み型](#built-in-types)に変換することができます。 - `value.toBool(): boolean` - `value.toI64(): i64` @@ -648,7 +648,7 @@ When the type of a value is certain, it can be converted to a [built-in type](#b - `value.toString(): string` - `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) -### Type Conversions Reference +### タイプ 変換参照 | Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | @@ -688,17 +688,17 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Data Source Metadata +### データソースのメタデータ -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +ハンドラを起動した`データソース`のコントラクトアドレス、ネットワーク、コンテキストは、以下のようにして調べることができます。 - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### Entity and DataSourceContext +### エンティティと DataSourceContext -The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: +ベースとなる`エンティティ`クラスと子クラスの`DataSourceContext`クラスには、フィールドを動的に設定・取得するためのヘルパーが用意されています。 - `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From 5435dcdfdbcb47c6c05027398e3daa9e33c81fa8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:53 -0500 Subject: [PATCH 144/241] New translations assemblyscript-api.mdx (Korean) 
--- pages/ko/developer/assemblyscript-api.mdx | 300 +++++++++++----------- 1 file changed, 150 insertions(+), 150 deletions(-) diff --git a/pages/ko/developer/assemblyscript-api.mdx b/pages/ko/developer/assemblyscript-api.mdx index 2afa431fe8c5..1940e1e7f916 100644 --- a/pages/ko/developer/assemblyscript-api.mdx +++ b/pages/ko/developer/assemblyscript-api.mdx @@ -2,220 +2,220 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/developer/assemblyscript-migration-guide) +> 참고: 만약 `graph-cli`/`graph-ts` 버전 `0.22.0` 이전의 서브그래프를 생성하는 경우, 이전 버젼의 AssemblyScript를 사용중인 경우, [`Migration Guide`](/developer/assemblyscript-migration-guide)를 참고하시길 권장드립니다. -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +이 페이지는 서브그래프 매핑을 작성할 때 사용할 수 있는 내장 API를 설명합니다. 다음 두 가지 종류의 API를 즉시 사용할 수 있습니다 : -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) 그리고 +- `graph codegen`에 의해 서브그래프 파일들에서 생성된 코드 -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)와 호환되는 한 다른 라이브러리들을 의존성(dependencies)으로서 추가할 수도 있습니다. 이것은 언어 매핑이 작성되기 때문에 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) 위키는 언어 및 표준 라이브러리 기능과 관련한 좋은 소스입니다. -## Installation +## 설치 -Subgraphs created with [`graph init`](/developer/create-subgraph-hosted) come with preconfigured dependencies. All that is required to install these dependencies is to run one of the following commands: +[`graph init`](/developer/create-subgraph-hosted)로 생성된 서브그래프는 미리 구성된 의존성들(dependencies)을 함께 동반합니다. 이러한 의존성들을 설치하려면 다음 명령 중 하나를 실행해야 합니다. ```sh yarn install # Yarn npm install # NPM ``` -If the subgraph was created from scratch, one of the following two commands will install the Graph TypeScript library as a dependency: +서브그래프가 처음부터 만들어진 경우 다음 두 명령 중 하나가 의존성으로서 그래프 타입스크립트 라이브러리를 설치할 것입니다. ```sh yarn add --dev @graphprotocol/graph-ts # Yarn npm install --save-dev @graphprotocol/graph-ts # NPM ``` -## API Reference +## API 참조 -The `@graphprotocol/graph-ts` library provides the following APIs: +`@graphprotocol/graph-ts` 라이브러리가 다음과 같은 API들을 제공합니다. -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and the Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. 
+- 이더리움 스마트 컨트렉트, 이벤트, 블록, 트랜젝션, 그리고 이더리움 값들과 작업하기 위한 `ethereum` API +- 더그래프 노드 스토어에서 엔티티를 로드하고 저장하기 위한 `store` API +- 더그래프 노드 출력 및 그래프 탐색기에 메세지를 기록하는 `log` API +- IPFS로부터 파일들을 로드하기 위한 `ipfs` API +- JSON 데이터를 구문 분석하는 `json` API +- 암호화 기능을 사용하기 위한 `crypto` API +- Ethereum, JSON, GraphQL 및 AssemblyScript와 같은 다양한 유형 시스템 간의 변환을 위한 저수준 프리미티브 -### Versions +### 버전 -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. The current mapping API version is 0.0.6. +서브그래프 매니페스트의 `apiVersion`은 주어진 서브그래프에 대해 그래프 노드가 실행하는 매핑 API 버전을 지정합니다. 현재 맵핑 API 버전은 0.0.6 입니다. -| Version | Release notes | -|:-------:| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/developer/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| 버전 | 릴리스 노트 | +|:-----:| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.6 | 이더리움 트랜잭션 개체에 `nonce` 필드를 추가했습니다.
`baseFeePerGas`가 이더리움 블록 개체에 추가되었습니다. | +| 0.0.5 | AssemblyScript를 버전 0.19.10으로 업그레이드했습니다(변경 내용 깨짐 포함. [`Migration Guide`](/developer/assemblyscript-migration-guide) 참조)
`ethereum.transaction.gasUsed`의 이름이 `ethereum.transaction.gasLimit`로 변경되었습니다. | +| 0.0.4 | Ethereum SmartContractCall 개체에 `functionSignature` 필드를 추가했습니다. | +| 0.0.3 | Ethereum Call 개체에 `from` 필드를 추가했습니다.
`etherem.call.address`의 이름이 `ethereum.call.to`로 변경되었습니다. | +| 0.0.2 | Ethereum Transaction 개체에 `input` 필드를 추가했습니다. | -### Built-in Types +### 기본 제공 유형 -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types). +AssemblyScript에 내장된 기본 유형에 대한 설명서는 [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types)에서 확인할 수 있습니다. -The following additional types are provided by `@graphprotocol/graph-ts`. +다음의 추가적인 유형들이 `@graphprotocol/graph-ts`에 의해 제공됩니다. #### ByteArray ```typescript -import { ByteArray } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { ByteArray }를 입력합니다. ``` -`ByteArray` represents an array of `u8`. +`ByteArray`가 `u8`의 배열을 나타냅니다. _Construction_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromI32(x: i32): ByteArray` - `x`를 바이트로 분해합니다. +- `fromHexString(hex: string): ByteArray` - 입력 길이는 반드시 짝수여야 합니다. `0x` 접두사는 선택사항입니다. -_Type conversions_ +_유형 변환_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. +- `toHexString(): string` - 접두사가 `0x`인 16진 문자열로 변환합니다. +- `toString(): string` - 바이트를 UTF-8 문자열로 해석합니다. +- `toBase58(): string` - 바이트를 base58 문자열로 인코딩합니다. +- `toU32(): u32` - 바이트를 little-endian `u32`로 해석합니다. 오버플로우의 경우에는 Throws 합니다. +- `toI32(): i32` - 바이트 배열을 little-endian `i32`로 해석합니다. 오버플로우의 경우에는 Throws 합니다. -_Operators_ +_연산자_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. +- `equals(y: ByteArray): bool` – `x == y`로 쓸 수 있습니다 #### BigDecimal ```typescript -import { BigDecimal } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'로 부터 { BigDecimal }을 입력합니다. ``` -`BigDecimal` is used to represent arbitrary precision decimals. +`BigDecimal`은 임의의 정밀도 소수를 나타내는 데 사용됩니다. _Construction_ -- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. -- `static fromString(s: string): BigDecimal` – parses from a decimal string. +- `constructor(bigInt: BigInt)` – `BigInt`로 부터 `BigDecimal`을 생성합니다. +- `static fromString(s: string): BigDecimal` – 10진수 문자열에서 구문 분석을 수행합니다. -_Type conversions_ +_유형 변환_ -- `toString(): string` – prints to a decimal string. +- `toString(): string` – 10진수 문자열로 인쇄합니다. _Math_ -- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`. -- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. -- `le(y: BigDecimal): bool` – can be written as `x <= y`. -- `gt(y: BigDecimal): bool` – can be written as `x > y`. -- `ge(y: BigDecimal): bool` – can be written as `x >= y`. -- `neg(): BigDecimal` - can be written as `-x`. +- `plus(y: BigDecimal): BigDecimal` – `x + y`로 쓸 수 있습니다. +- `minus(y: BigDecimal): BigDecimal` – `x - y`로 쓸 수 있습니다. +- `times(y: BigDecimal): BigDecimal` – `x * y`로 쓸 수 있습니다. 
+- `div(y: BigDecimal): BigDecimal` – `x / y`로 쓸 수 있습니다. +- `equals(y: BigDecimal): bool` – `x == y`로 쓸 수 있습니다. +- `notEqual(y: BigDecimal): bool` – `x != y`로 쓸 수 있습니다. +- `lt(y: BigDecimal): bool` – `x < y`로 쓸 수 있습니다. +- `le(y: BigDecimal): bool` – `x <= y`로 쓸 수 있습니다. +- `gt(y: BigDecimal): bool` – `x > y`로 쓸 수 있습니다. +- `ge(y: BigDecimal): bool` – `x >= y`로 쓸 수 있습니다. +- `neg(): BigDecimal` - `-x`로 쓸 수 있습니다. #### BigInt ```typescript -import { BigInt } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { BigInt }를 입력합니다. ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt`는 큰 정수를 나타내는 데 사용됩니다. 여기에는 `uint32` ~ `uint256` 및 `int64` ~ `int256`값이 포함됩니다. `int32`, `uint24` 혹은 `int8`과 같은 `uint32` 이하는 전부 `i32`로 표시됩니다. -The `BigInt` class has the following API: +`BigInt` 클래스에는 다음의 API가 있습니다: _Construction_ -- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. -- `BigInt.fromString(s: string): BigInt`– Parses a `BigInt` from a string. -- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first. -- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. +- `BigInt.fromI32(x: i32): BigInt` – `i32`로 부터 `BigInt`를 생성합니다. +- `BigInt.fromString(s: string): BigInt`– 문자열로부터 `BigInt`를 구문 분석합니다. +- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – `bytes`를 부호 없는 little-endian 정수로 해석합니다. 입력 값이 big-endian인 경우, 먼저 `.reverse()`를 호출하십시오. +- `BigInt.fromSignedBytes(x: Bytes): BigInt` – `bytes`를 signed, little-endian 정수로 해석합니다. 입력 값이 big-endian인 경우, 먼저 `.reverse()`를 호출하십시오. - _Type conversions_ + _유형 변환_ -- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. -- `x.toString(): string` – turns `BigInt` into a decimal number string. -- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if it the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. -- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. +- `x.toHex(): string` – `BigInt`를 16진수 문자열로 바꿉니다. +- `x.toString(): string` – `BigInt`를 10진수 문자열로 바꿉니다. +- `x.toI32(): i32` – `BigInt`를 `i32`로 반환합니다; 만약 값이 `i32`에 부합하지 않으면, 실패합니다. `x.isI32()`를 먼저 확인하는 것이 좋습니다. +- `x.toBigDecimal(): BigDecimal` - 소수 부분 없이 십진수로 변환합니다. _Math_ -- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. -- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. -- `x.times(y: BigInt): BigInt` – can be written as `x * y`. -- `x.div(y: BigInt): BigInt` – can be written as `x / y`. -- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. -- `x.equals(y: BigInt): bool` – can be written as `x == y`. -- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. -- `x.lt(y: BigInt): bool` – can be written as `x < y`. -- `x.le(y: BigInt): bool` – can be written as `x <= y`. -- `x.gt(y: BigInt): bool` – can be written as `x > y`. -- `x.ge(y: BigInt): bool` – can be written as `x >= y`. -- `x.neg(): BigInt` – can be written as `-x`. -- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. -- `x.isZero(): bool` – Convenience for checking if the number is zero. -- `x.isI32(): bool` – Check if the number fits in an `i32`. -- `x.abs(): BigInt` – Absolute value. 
-- `x.pow(exp: u8): BigInt` – Exponentiation. -- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. -- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. -- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. -- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. +- `x.plus(y: BigInt): BigInt` – `x + y`로 쓸 수 있습니다. +- `x.minus(y: BigInt): BigInt` – `x - y`로 쓸 수 있습니다. +- `x.times(y: BigInt): BigInt` – `x * y`로 쓸 수 있습니다. +- `x.div(y: BigInt): BigInt` – `x / y`로 쓸 수 있습니다. +- `x.mod(y: BigInt): BigInt` – `x % y`로 쓸 수 있습니다. +- `x.equals(y: BigInt): bool` – `x == y`로 쓸 수 있습니다. +- `x.notEqual(y: BigInt): bool` – `x != y`로 쓸 수 있습니다. +- `x.lt(y: BigInt): bool` – `x < y`로 쓸 수 있습니다. +- `x.le(y: BigInt): bool` – `x <= y`로 쓸 수 있습니다. +- `x.gt(y: BigInt): bool` – `x > y`로 쓸 수 있습니다. +- `x.ge(y: BigInt): bool` – `x >= y`로 쓸 수 있습니다. +- `x.neg(): BigInt` – `-x`로 쓸 수 있습니다. +- `x.divDecimal(y: BigDecimal): BigDecimal` – 십진수로 나누어, 십진 결과를 제공합니다. +- `x.isZero(): bool` – 숫자가 0인지 확인하는데 편리합니다. +- `x.isI32(): bool` – 숫자가 `i32`에 부합하는지 확인합니다. +- `x.abs(): BigInt` – 절대값. +- `x.pow(exp: u8): BigInt` – 지수화. +- `bitOr(x: BigInt, y: BigInt): BigInt` – `x | y`로 쓸 수 있습니다. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – `x & y`로 쓸 수 있습니다. +- `leftShift(x: BigInt, bits: u8): BigInt` – `x << y`로 쓸 수 있습니다. +- `rightShift(x: BigInt, bits: u8): BigInt` – `x >> y`로 쓸 수 있습니다. #### TypedMap ```typescript -import { TypedMap } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { TypedMap }를 입력합니다. ``` -`TypedMap` can be used to stored key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +`TypedMap`는 key-value 쌍을 저장하는데 사용될 수 있습니다. [이 예](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51)를 보시기 바랍니다. -The `TypedMap` class has the following API: +`TypedMap` 클래스에는 다음의 API가 있습니다. -- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `T` -- `map.set(key: K, value: V): void` – sets the value of `key` to `value` -- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map -- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map -- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not +- `new TypedMap()` – 유형 `K`의 키와 유형 `T`의 값을 사용하여 빈 맵을 생성합니다. +- `map.set(key: K, value: V): void` – `key` 값을 `value`로 설정합니다. +- `map.getEntry(key: K): TypedMapEntry | null` – 만약 `key`가 맵에 존재하지 않는 경우, `key` 혹은 `null` 에 대한 key-value 쌍을 반환합니다. +- `map.get(key: K): V | null` – 만약 `key`가 맵에 존재하지 않으면, `key` 혹은 `null` 값을 반환합니다. +- `map.isSet(key: K): bool` – 만약 `key`는 맵에 존재하나, `false`가 맵에 존재하지 않는 경우, `true`를 반환합니다. #### Bytes ```typescript -import { Bytes } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { Bytes }를 입력합니다. ``` -`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32` etc. +`Bytes`는 임의 길이의 바이트 배열을 나타내는 데 사용됩니다. 이는 `bytes`, `bytes32` 등의 이더리움 값을 포함합니다. 
-The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: +`Bytes` 클래스는 AssemblyScript의 [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64)를 확장하며, 모든 `Uint8Array` 기능과 다음과 같은 새 매서드를 지원합니다: -- `b.toHex()` – returns a hexadecimal string representing the bytes in the array -- `b.toString()` – converts the bytes in the array to a string of unicode characters -- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) +- `b.toHex()` – 배열상의 바이트를 나타내는 16진수 문자열을 반환합니다. +- `b.toString()` – 배열상의 바이트를 유니코드 문자 문자열로 변환합니다. +- `b.toBase58()` – 이더리움 바이트 값을 base58 인코딩(IPFS 해시에 사용)으로 변환합니다. #### Address ```typescript -import { Address } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { Address } 를 입력합니다. ``` -`Address` extends `Bytes` to represent Ethereum `address` values. +`Address`는 `Bytes`를 확장하여 이더리움 `address` 값을 나타냅니다. -It adds the following method on top of the `Bytes` API: +`Bytes` API 위에 다음 메서드를 추가합니다: -- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string +- `Address.fromString(s: string): Address` – 16진수 문자열에서 `Address` 를 생성합니다. ### Store API ```typescript -import { store } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { store }를 입력합니다. ``` -The `store` API allows to load, save and remove entities from and to the Graph Node store. +`store` API 를 사용하면 더 그래프 노드 스토어에서 엔티티를 로드, 저장 및 제거할 수 있습니다. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +스토어에 작성된 엔티티는 서브그래프의 GraphQL 스키마에 정의된 `@entity` 유형에 일대일로 매핑됩니다. 이러한 엔터티 작업을 편리하게 하기 위해 [Graph CLI](https://github.com/graphprotocol/graph-cli)에서 제공하는 `graph codegen` 명령은 기본 제공 `Entity` 유형의 서브 클래스인 엔터티 클래스를 생성하며, 스키마의 필드에 대한 속성 getter 및 setter와 이러한 엔티티를 로드 및 저장하는 메서드를 사용합니다. #### Creating entities -The following is a common pattern for creating entities from Ethereum events. +다음은 이더리움 이벤트에서 엔티티를 생성하기 위한 일반적인 패턴입니다. ```typescript // Import the Transfer event class generated from the ERC20 ABI @@ -241,13 +241,13 @@ export function handleTransfer(event: TransferEvent): void { } ``` -When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. +체인을 처리하는 동안 `Transfer` 이벤트가 발생하면, 이는 생성된 `Transfer` 유형(엔터티 유형과 이름 충돌이 발생하지 않도록 여기서 `TransferEvent`로 별칭 지정)을 사용하여 `handleTransfer` 이벤트 핸들러에 전달됩니다. 이 유형을 사용하면 이벤트의 상위 트랜잭션 및 해당 매개 변수와 같은 데이터에 액세스할 수 있습니다. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. 
Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +각 엔티티는 다른 엔티티와의 충돌을 피하기 위해 고유한 ID를 가져야 합니다. 이벤트 매개변수에 사용할 수 있는 고유 식별자가 포함되는 것은 매우 일반적입니다. 참고: 트랜잭션 해시를 ID로 사용하면 동일한 트랜잭션의 다른 이벤트가 이 해시를 ID로 사용하여 엔티티를 만들지 않는다고 가정합니다. -#### Loading entities from the store +#### 스토어에서 엔티티 로드 -If an entity already exists, it can be loaded from the store with the following: +엔티티가 이미 존재하는 경우, 이는 다음을 사용하여 스토어에서 로드할 수 있습니다. ```typescript let id = event.transaction.hash.toHex() // or however the ID is constructed @@ -259,18 +259,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +엔티티가 스토어에 아직 존재하지 않을 수도 있으므로, `load` 메서드는 `Transfer | null` 유형의 값을 반환합니다. 떠라서 해당 값을 사용하기 전에 `null` 케이스를 확인해야 할 수 있습니다. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> **Note:**: 매핑에서 변경한 내용이 엔티티의 이전 데이터에 종속된 경우에만 엔티티 로드가 필요합니다. 다음 섹션에서 기존 엔티티들을 업데이트하는 두 가지 방법을 확인하시기 바랍니다. -#### Updating existing entities +#### 기존 엔티티 업데이트 -There are two ways to update an existing entity: +기존 엔티티를 업데이트 하는 방법에는 두 가지가 있습니다. -1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +1. 엔터티를 로드합니다. `Transfer.load(id)`를 예로들어, 엔터티의 속성을 설정한 다음, 스토어에 다시 `.save()`합니다. +2. `new Transfer(id)`를 예로 들어, 간단하게 엔티티를 생성하기만 하면 됩니다. 엔티티의 속성을 설정한 다음 이를 스토어에 `.save()` 합니다. 만약 엔티티가 이미 존재하는 경우, 변경사항들은 병합됩니다. -Changing properties is straight forward in most cases, thanks to the generated property setters: +속성 변경은 생성된 속성 설정기 덕분에 대부분의 경우 간단합니다. ```typescript let transfer = new Transfer(id) @@ -279,51 +279,51 @@ transfer.to = ... transfer.amount = ... ``` -It is also possible to unset properties with one of the following two instructions: +다음 두 가지 지침 중 하나로 속성을 설정 해제할 수도 있습니다. ```typescript transfer.from.unset() transfer.from = null ``` -This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`. +이는 오직 선택적 속성으로만 작동하는데, 예를 들어 GraphQL에서 `!` 없이 표기된 속성들입니다. `owner: Bytes` 혹은 `amount: BigInt`를 두 가지 예로 들 수 있습니다. -Updating array properties is a little more involved, as the getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field. +엔터티에서 배열을 가져오면 해당 배열의 복사본이 생성되기 때문에 배열 속성 업데이트는 조금 더 복잡합니다. 이는 배열을 변경한 후 명시적으로 배열 속성을 다시 설정해야 함을 의미합니다. 다음은 `entity`에 `numbers: [BigInt!]!` 필드가 있다고 가정합니다. ```typescript -// This won't work +// 이는 작동하지 않을 것입니다. entity.numbers.push(BigInt.fromI32(1)) entity.save() -// This will work +// 이는 작동 할 것입니다. let numbers = entity.numbers numbers.push(BigInt.fromI32(1)) entity.numbers = numbers entity.save() ``` -#### Removing entities from the store +#### 스토어에서 엔티티 제거하기 -There is currently no way to remove an entity via the generated types. 
Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: +현재 생성된 유형을 통해 엔티티를 제거할 수 있는 방법은 없습니다. 대신 엔티티를 제거하려면 엔티티 유형의 이름과 엔티티 ID를 `store.remove`에 전달해야 합니다. ```typescript -import { store } from '@graphprotocol/graph-ts' +'@graphprotocol/graph-ts'에서 { store }를 입력합니다. ... let id = event.transaction.hash.toHex() store.remove('Transfer', id) ``` -### Ethereum API +### 이더리움 API -The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. +이더리움 API는 스마트 컨트렉트, 퍼블릭 상태 변수, 컨트렉트 기능, 이벤트, 트랜잭션, 블록 및 이더리움 데이터 인코딩/디코딩에 대한 액세스를 제공합니다. -#### Support for Ethereum Types +#### 이더리움 유형 지원 -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +엔터티와 마찬가지로 `graph codegen`은 서브그래프에서 사용되는 모든 스마트 컨트랙트 및 이벤트에 대한 클래스를 생성합니다. 이를 위해 컨트랙트 ABI는 서브그래프 매니페스트에서 데이터 소스의 일부여야 합니다. 일반적으로 ABI 파일은 `abis/` 폴더에 저장됩니다. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +생성된 클래스를 사용하면 이더리움 유형과 [내장 유형](#built-in-types) 간의 변환이 뒤에서 이루어지므로 서브그래프 작성자는 이에 대해 걱정할 필요가 없습니다. -The following example illustrates this. Given a subgraph schema like +다음의 예가 이를 보여줍니다. 다음과 같은 서브그래프 스키마가 주어지면 ```graphql type Transfer @entity { @@ -333,7 +333,7 @@ type Transfer @entity { } ``` -and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: +그리고 이더리움 상의 `Transfer(address,address,uint256)` 이벤트 서명, `from`, `to` 및 `amount` 유형 값 `address`, `address` 그리고 `uint256`는 `Address` 및 `BigInt`로 변환되고, `Bytes!` 및 `Transfer` 엔티티의 `BigInt!` 속성에 전달됩니다: ```typescript let id = event.transaction.hash.toHex() @@ -344,9 +344,9 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Events and Block/Transaction Data +#### 이벤트 및 블록/트랜젝션 데이터 -Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): +이전의 예시에서 `Transfer` 이벤트에 대해 설명한 바와 같이, 이벤트 핸들로들에게 전달된 이더리움 이벤트들은 이벤트 매개변수에 엑세스를 제공할 뿐만 아니라 상위 트랜잭션과 이벤트 핸들러가 속한 블록에 대한 액세스를 제공합니다. 
다음의 데이터는 이벤트 인스턴스(이러한 클래스들은 `graph-ts`의 `ethereum` 모듈의 일부입니다)에서 얻을 수 있습니다: ```typescript class Event { @@ -621,10 +621,10 @@ import { json, JSONValueKind } from '@graphprotocol/graph-ts' JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: @@ -646,9 +646,9 @@ When the type of a value is certain, it can be converted to a [built-in type](#b - `value.toF64(): f64` - `value.toBigInt(): BigInt` - `value.toString(): string` -- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) +- `value.toArray(): Array` - (이후 `JSONValue`를 상기 5개 방법 중 하나로 변환합니다.) -### Type Conversions Reference +### 유형 변환 참조 | Source(s) | Destination | Conversion function | | -------------------- | -------------------- | ---------------------------- | @@ -688,17 +688,17 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Data Source Metadata +### 데이터 소스 메타데이터 -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +`dataSource` 네임스페이스를 통해 핸들러를 호출한 데이터 소스의 계약 주소, 네트워크 및 컨텍스트를 검사할 수 있습니다 - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### Entity and DataSourceContext +### 엔티티 및 Entity and DataSourceContext -The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: +기본 `Entity` 클래스 및 child `DataSourceContext`는 필드를 동적으로 설정하고 필드를 가져오는 도우미가 있습니다. 
- `setString(key: string, value: string): void` - `setI32(key: string, value: i32): void` From dece50a6ddc780cdfbfa2307fec890f2be45ab16 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:54 -0500 Subject: [PATCH 145/241] New translations assemblyscript-api.mdx (Chinese Simplified) --- pages/zh/developer/assemblyscript-api.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/zh/developer/assemblyscript-api.mdx b/pages/zh/developer/assemblyscript-api.mdx index 2afa431fe8c5..b5066fab02f2 100644 --- a/pages/zh/developer/assemblyscript-api.mdx +++ b/pages/zh/developer/assemblyscript-api.mdx @@ -621,10 +621,10 @@ import { json, JSONValueKind } from '@graphprotocol/graph-ts' JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: From bba234f18cf44477dd6fdba1754a125f59d952ec Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:55 -0500 Subject: [PATCH 146/241] New translations assemblyscript-api.mdx (Vietnamese) --- pages/vi/developer/assemblyscript-api.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/vi/developer/assemblyscript-api.mdx b/pages/vi/developer/assemblyscript-api.mdx index 2afa431fe8c5..b5066fab02f2 100644 --- a/pages/vi/developer/assemblyscript-api.mdx +++ b/pages/vi/developer/assemblyscript-api.mdx @@ -621,10 +621,10 @@ import { json, JSONValueKind } from '@graphprotocol/graph-ts' JSON data can be parsed using the `json` API: -- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array - `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed -- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` -- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed +- `json.fromString(data: Bytes): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: Bytes): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. 
Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: From 4af6e9aaf12f25fe4c64d5dfe045946f3962e5d7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:56 -0500 Subject: [PATCH 147/241] New translations assemblyscript-migration-guide.mdx (Spanish) --- .../assemblyscript-migration-guide.mdx | 184 +++++++++--------- 1 file changed, 92 insertions(+), 92 deletions(-) diff --git a/pages/es/developer/assemblyscript-migration-guide.mdx b/pages/es/developer/assemblyscript-migration-guide.mdx index 2db90a608110..acdc2366df9b 100644 --- a/pages/es/developer/assemblyscript-migration-guide.mdx +++ b/pages/es/developer/assemblyscript-migration-guide.mdx @@ -1,50 +1,50 @@ --- -title: AssemblyScript Migration Guide +title: Guia de Migracion de AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Hasta ahora, los subgrafos han utilizado una de las [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finalmente, hemos añadido soporte para la [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +Esto permitirá a los desarrolladores de subgrafos utilizar las nuevas características del lenguaje AS y la libreria estándar. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +Esta guia es aplicable para cualquiera que use `graph-cli`/`graph-ts` bajo la version `0.22.0`. Si ya estás en una versión superior (o igual) a esa, ya has estado usando la versión `0.19.10` de AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Nota: A partir de `0.24.0`, `graph-node` puede soportar ambas versiones, dependiendo del `apiVersion` especificado en el manifiesto del subgrafo. 
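The `json` hunks above only enumerate the parsing functions. As a rough illustration of how the safe `try_` variant is typically used inside a mapping — the `readName` helper and the `name` field are hypothetical, and the exact `Result` accessors should be verified against the `graph-ts` version in use:

```typescript
import { Bytes, json, JSONValueKind, log } from '@graphprotocol/graph-ts'

// Hypothetical helper: pull a "name" string out of a raw JSON payload
// (for example, file contents fetched from IPFS). The `try_` variant is
// used so malformed JSON does not abort the mapping.
function readName(data: Bytes): string | null {
  let result = json.try_fromBytes(data)
  if (result.isError) {
    log.warning('Could not parse JSON payload', [])
    return null
  }

  let value = result.value
  if (value.kind == JSONValueKind.OBJECT) {
    let name = value.toObject().get('name')
    if (name != null) {
      if (name.kind == JSONValueKind.STRING) {
        return name.toString()
      }
    }
  }
  return null
}
```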
-## Features +## Caracteristicas -### New functionality +### Nueva Funcionalidad -- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- `TypedArray`s ahora puede construirse desde `ArrayBuffer`s usando el [nuevo `wrap` metodo estatico](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- Nuevas funciones de la biblioteca estándar: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Se agrego soporte para x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Se agrego `StaticArray`, una mas eficiente variante de array ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Se agrego `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Se implemento el argumento `radix` en `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Se agrego soporte para los separadores en los literales de punto flotante ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Se agrego soporte para las funciones de primera clase ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Se agregaron builtins: 
`i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Se implemento `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Se agrego soporte para las plantillas de strings literales ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Se agrego `encodeURI(Component)` y `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Se agrego `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Se agrego `toUTCString` para `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Se agrego `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizations +### Optimizaciones -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- `Math` funciones como `exp`, `exp2`, `log`, `log2` y `pow` fueron reemplazadas por variantes mas rapidas ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Optimizar ligeramente `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Caché de más accesos a campos en std Map y Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Optimizar para potencias de dos en `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Other +### Otros -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- El tipo de un literal de array puede ahora inferirse a partir de su contenido ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Actualizado stdlib a Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## How to upgrade? +## Como actualizar? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Cambiar tus asignaciones `apiVersion` en `subgraph.yaml` a `0.0.6`: ```yaml ... @@ -56,7 +56,7 @@ dataSources: ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. Actualiza la `graph-cli` que usas a la `latest` version ejecutando: ```bash # if you have it globally installed @@ -66,20 +66,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. Haz lo mismo con `graph-ts`, pero en lugar de instalarlo globalmente, guárdalo en tus dependencias principales: ```bash npm install --save @graphprotocol/graph-ts@latest ``` -4. 
Follow the rest of the guide to fix the language breaking changes. -5. Run `codegen` and `deploy` again. +4. Sigue el resto de la guía para arreglar los cambios que rompen el idioma. +5. Ejecuta `codegen` y `deploy` nuevamente. -## Breaking changes +## Rompiendo los esquemas -### Nullability +### Anulabilidad -On the older version of AssemblyScript, you could create code like this: +En la versión anterior de AssemblyScript, podías crear un código como el siguiente: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -However on the newer version, because the value is nullable, it requires you to check, like this: +Sin embargo, en la versión más reciente, debido a que el valor es anulable, es necesario que lo compruebes, así: ```typescript let maybeValue = load() @@ -98,7 +98,7 @@ if (maybeValue) { } ``` -Or force it like this: +O forzarlo asi: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -106,11 +106,11 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +Si no estás seguro de cuál elegir, te recomendamos que utilices siempre la versión segura. Si el valor no existe, es posible que quieras hacer una declaración if temprana con un retorno en tu handler de subgrafo. ### Variable Shadowing -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +Antes podías hacer [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) y un código como este funcionaría: ```typescript let a = 10 @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -However now this isn't possible anymore, and the compiler returns this error: +Sin embargo, ahora esto ya no es posible, y el compilador devuelve este error: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -127,9 +127,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -You'll need to rename your duplicate variables if you had variable shadowing. -### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +Tendrás que cambiar el nombre de las variables duplicadas si tienes una variable shadowing. +### Comparaciones Nulas +Al hacer la actualización en ut subgrafo, a veces pueden aparecer errores como estos: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -137,7 +137,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +Para solucionarlo puedes simplemente cambiar la declaracion `if` por algo así: ```typescript if (!decimals) { @@ -147,23 +147,23 @@ To solve you can simply change the `if` statement to something like this: if (decimals === null) { ``` -The same applies if you're doing != instead of ==. +Lo mismo ocurre si haces != en lugar de ==. 
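The nullability section above recommends the safe check over the `!` force-unwrap. A sketch of what that pattern looks like in an event handler — the generated import paths, the `Token` entity, its `burned` field and the `Burn` event are invented for illustration:

```typescript
import { log } from '@graphprotocol/graph-ts'
import { Burn as BurnEvent } from '../generated/MyContract/MyContract' // hypothetical generated event class
import { Token } from '../generated/schema' // hypothetical generated entity class

export function handleBurn(event: BurnEvent): void {
  let id = event.params.tokenId.toString()
  let token = Token.load(id) // returns Token | null

  if (token) {
    // the compiler narrows `token` to a non-null Token inside this block
    token.burned = true
    token.save()
  } else {
    // early exit instead of force-unwrapping with `load(...)!`
    log.warning('Token {} not found, skipping burn', [id])
  }
}
```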
### Casting -The common way to do casting before was to just use the `as` keyword, like this: +La forma común de hacer el casting antes era simplemente usar la palabra clave `as`, así: ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -However this only works in two scenarios: +Sin embargo, esto sólo funciona en dos casos: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); -- Upcasting on class inheritance (subclass → superclass) +- Casting de primitivas (entre tipos como `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Upcasting en la herencia de clases (subclase → superclase) -Examples: +Ejemplos: ```typescript // primitive casting @@ -179,10 +179,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +Hay dos escenarios en los que puede querer cast, pero usando `as`/`var` **no es seguro**: -- Downcasting on class inheritance (superclass → subclass) -- Between two types that share a superclass +- Downcasting en la herencia de clases (superclase → subclase) +- Entre dos tipos que comparten una superclase ```typescript // downcasting on class inheritance @@ -199,7 +199,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +Para esos casos, puedes usar la funcion`changetype`: ```typescript // downcasting on class inheritance @@ -218,7 +218,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. +Si sólo quieres eliminar la anulabilidad, puedes seguir usando el `as` operador (o `variable`), pero asegúrate de que el valor no puede ser nulo, de lo contrario se romperá. 
```typescript // remove nullability @@ -231,18 +231,18 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +Para el caso de la anulabilidad se recomienda echar un vistazo al [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), hara que tu codigo sea mas limpio 🙂 -Also we've added a few more static methods in some types to ease casting, they are: +También hemos añadido algunos métodos estáticos más en algunos tipos para facilitar el casting, son: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Nullability check with property access +### Comprobación de anulabilidad con acceso a la propiedad -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +Para usar el [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) puedes usar la declaracion `if` o el operador ternario (`?` and `:`) asi: ```typescript let something: string | null = 'data' @@ -260,7 +260,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +Sin embargo eso sólo funciona cuando estás haciendo el `if` / ternario en una variable, no en un acceso a una propiedad, como este: ```typescript class Container { @@ -273,7 +273,7 @@ container.data = 'data' let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` -Which outputs this error: +Lo que produce este error: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -281,7 +281,7 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` -To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: +Para solucionar este problema, puedes crear una variable para ese acceso a la propiedad de manera que el compilador pueda hacer la magia de la comprobación de nulidad: ```typescript class Container { @@ -296,9 +296,9 @@ let data = container.data let somethingOrElse: string = data ? data : 'else' // compiles just fine :) ``` -### Operator overloading with property access +### Sobrecarga de operadores con acceso a propiedades -If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. +Si intentas sumar (por ejemplo) un tipo anulable (desde un acceso a una propiedad) con otro no anulable, el compilador de AssemblyScript en lugar de dar un error en el tiempo de compilación advirtiendo que uno de los valores es anulable, simplemente compila en silencio, dando oportunidad a que el código se rompa en tiempo de ejecución. 
```typescript class BigInt extends Uint8Array { @@ -322,7 +322,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +Hemos abierto un tema en el compilador de AssemblyScript para esto, pero por ahora si haces este tipo de operaciones en tus mapeos de subgrafos, deberías cambiarlos para hacer una comprobación de nulos antes de ello. ```typescript let wrapper = new Wrapper(y) @@ -334,9 +334,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### Value initialization +### Inicialización del valor -If you have any code like this: +Si tienes algún código como este: ```typescript var value: Type // null @@ -344,7 +344,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +Compilará pero se romperá en tiempo de ejecución, eso ocurre porque el valor no ha sido inicializado, así que asegúrate de que tu subgrafo ha inicializado sus valores, así: ```typescript var value = new Type() // initialized @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Also if you have nullable properties in a GraphQL entity, like this: +También si tienes propiedades anulables en una entidad GraphQL, como esta: ```graphql type Total @entity { @@ -361,7 +361,7 @@ type Total @entity { } ``` -And you have code similar to this: +Y tienes un código similar a este: ```typescript let total = Total.load('latest') @@ -373,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +Tendrás que asegurarte de inicializar el valor `total.amount`, porque si intentas acceder como en la última línea para la suma, se bloqueará. 
Así que o bien la inicializas primero: ```typescript let total = Total.load('latest') @@ -386,7 +386,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +O simplemente puedes cambiar tu esquema GraphQL para no usar un tipo anulable para esta propiedad, entonces la inicializaremos como cero en el paso `codegen` 😉 ```graphql type Total @entity { @@ -405,9 +405,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### Class property initialization +### Inicialización de las propiedades de la clase -If you export any classes with properties that are other classes (declared by you or by the standard library) like this: +Si exportas alguna clase con propiedades que son otras clases (declaradas por ti o por la libreria estándar) así: ```typescript class Thing {} @@ -417,7 +417,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +El compilador dará un error porque tienes que añadir un inicializador para las propiedades que son clases, o añadir el operador `!`: ```typescript export class Something { @@ -441,11 +441,11 @@ export class Something { } ``` -### GraphQL schema +### Esquema GraphQL -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +Esto no es un cambio directo de AssemblyScript, pero es posible que tengas que actualizar tu archivo `schema.graphql`. -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +Ahora ya no puedes definir campos en tus tipos que sean Listas No Anulables. Si tienes un esquema como este: ```graphql type Something @entity { @@ -458,7 +458,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Tendrás que añadir un `!` al miembro de la Lista tipo, así: ```graphql type Something @entity { @@ -471,14 +471,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +Esto ha cambiado debido a las diferencias de anulabilidad entre las versiones de AssemblyScript, y está relacionado con el archivo `src/generated/schema.ts` (ruta por defecto, puede que lo hayas cambiado). -### Other +### Otros -- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) -- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Alineado `Map#set` y `Set#add` con el spec, devolviendo `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Las arrays ya no heredan de ArrayBufferView, sino que son distintas ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Las clases inicializadas a partir de literales de objetos ya no pueden definir un constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- El resultado de una operación binaria `**` es ahora el entero denominador común si ambos operandos son enteros. Anteriormente, el resultado era un flotante como si se llamara a `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Coercionar `NaN` a `false` cuando casting a `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- Al desplazar un valor entero pequeño de tipo `i8`/`u8` o `i16`/`u16`, sólo los 3 o 4 bits menos significativos del valor RHS afectan al resultado, de forma análoga al resultado de un `i32.shl` que sólo se ve afectado por los 5 bits menos significativos del valor RHS. Ejemplo: `someI8 << 8` previamente producia el valor `0`, pero ahora produce `someI8` debido a enmascarar el RHS como `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Corrección de errores en las comparaciones de strings relacionales cuando los tamaños difieren ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From 58bec464b1ee6f29793b2dab06a3364ace507c5e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:57 -0500 Subject: [PATCH 148/241] New translations assemblyscript-migration-guide.mdx (Arabic) --- .../assemblyscript-migration-guide.mdx | 158 +++++++++--------- 1 file changed, 79 insertions(+), 79 deletions(-) diff --git a/pages/ar/developer/assemblyscript-migration-guide.mdx b/pages/ar/developer/assemblyscript-migration-guide.mdx index 2db90a608110..38ed98f0c53b 100644 --- a/pages/ar/developer/assemblyscript-migration-guide.mdx +++ b/pages/ar/developer/assemblyscript-migration-guide.mdx @@ -1,50 +1,50 @@ --- -title: AssemblyScript Migration Guide +title: دليل ترحيل AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 +حتى الآن ، كانت ال Subgraphs تستخدم أحد [ الإصدارات الأولى من AssemblyScript ](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). أخيرًا ، أضفنا الدعم لـ [ أحدث دعم متاح ](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +سيمكن ذلك لمطوري ال Subgraph من استخدام مميزات أحدث للغة AS والمكتبة القياسية. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +ينطبق هذا الدليل على أي شخص يستخدم `graph-cli`/`graph-ts` ادنى من الإصدار `0.22.0`. إذا كنت تستخدم بالفعل إصدارًا أعلى من (أو مساويًا) لذلك ، فأنت بالفعل تستخدم الإصدار `0.19.10` من AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> ملاحظة: اعتبارًا من `0.24.0` ، يمكن أن يدعم `grapg-node` كلا الإصدارين ، اعتمادًا على `apiVersion` المحدد في Subgraph manifest. -## Features +## مميزات -### New functionality +### وظائف جديدة - `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- وظائف المكتبة القياسية الجديدة`String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- تمت إضافة دعم لـ x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- تمت إضافة`Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- تم تنفيذ`radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) - Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- دعم إضافي لوظائف الدرجة الأولى ([ v0.14.0 ](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- إضافة البناء: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- تنفيذ `Array/TypedArray/String#at` 
([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) - Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- أضف`encodeURI(Component)` و `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- أضف`toString`, `toDateString` و `toTimeString` ل `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- أضف`toUTCString` ل `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- أضف`nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizations +### التحسينات -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- `Math` دوال مثل `exp`, `exp2`, `log`, `log2` and `pow` تم استبدالها بمتغيرات أسرع ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- أكثر تحسينا `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- قم بتحسين قدرات اثنين في `ipow32 / 64` ([ v0.18.2 ](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Other +### آخر -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- يمكن الآن استنتاج نوع array literal من محتوياتها([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- تم تحديث stdlib إلى Unicode 13.0.0([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## How to upgrade? +## كيف تقوم بالترقية؟ -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. تغيير ال Mappings الخاص بك `apiVersion` في `subgraph.yaml` إلى `0.0.6`: ```yaml ... @@ -56,7 +56,7 @@ dataSources: ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. قم بتحديث `graph-cli` الذي تستخدمه إلى `أحدث إصدار` عن طريق تشغيل: ```bash # if you have it globally installed @@ -66,20 +66,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. 
افعل الشيء نفسه مع `graph-ts` ، ولكن بدلاً من التثبيت بشكل عام ، احفظه في dependencies الرئيسية: ```bash npm install --save @graphprotocol/graph-ts@latest ``` 4. Follow the rest of the guide to fix the language breaking changes. -5. Run `codegen` and `deploy` again. +5. قم بتشغيل `codegen` و `deploy` مرة أخرى. ## Breaking changes ### Nullability -On the older version of AssemblyScript, you could create code like this: +في الإصدار الأقدم من AssemblyScript ، يمكنك إنشاء كود مثل هذا: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -However on the newer version, because the value is nullable, it requires you to check, like this: +ولكن في الإصدار الأحدث ، نظرًا لأن القيمة nullable ، فإنها تتطلب منك التحقق ، مثل هذا: ```typescript let maybeValue = load() @@ -98,7 +98,7 @@ if (maybeValue) { } ``` -Or force it like this: +أو إجباره على هذا النحو: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +إذا لم تكن متأكدًا من اختيارك ، فنحن نوصي دائمًا باستخدام الإصدار الآمن. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. ### Variable Shadowing @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -However now this isn't possible anymore, and the compiler returns this error: +لكن هذا لم يعد ممكنًا الآن ، ويعيد المترجم هذا الخطأ: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -127,9 +127,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -You'll need to rename your duplicate variables if you had variable shadowing. -### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +ستحتاج إلى إعادة تسمية المتغيرات المكررة إذا كان لديك variable shadowing. +### مقارنات ملغية(Null Comparisons) +من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -137,7 +137,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +لحل المشكلة يمكنك ببساطة تغيير عبارة `if` إلى شيء مثل هذا: ```typescript if (!decimals) { @@ -147,23 +147,23 @@ To solve you can simply change the `if` statement to something like this: if (decimals === null) { ``` -The same applies if you're doing != instead of ==. +الأمر نفسه ينطبق إذا كنت تفعل! = بدلاً من ==. 
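For the variable-shadowing change covered in these patches, the guide shows only the failing code and the compiler error; the fix is simply to give the re-declared binding its own name, roughly:

```typescript
// Before (AssemblyScript 0.6) this shadowed redeclaration compiled:
//   let a = 10
//   let b = 20
//   let a = a + b
// With 0.19.10 the redeclaration is rejected, so rename the new binding:
let a = 10
let b = 20
let sum = a + b
```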
### Casting -The common way to do casting before was to just use the `as` keyword, like this: +كانت الطريقة الشائعة لإجراء ال Casting من قبل هي استخدام `as`كلمة رئيسية ، مثل هذا: ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -However this only works in two scenarios: +لكن هذا لا يعمل إلا في سيناريوهين: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Primitive casting (بين انواع مثل`u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); - Upcasting on class inheritance (subclass → superclass) -Examples: +أمثلة: ```typescript // primitive casting @@ -179,10 +179,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +هناك سيناريوهين قد ترغب في ال cast ، ولكن باستخدام`as`/`var` **ليس آمنا**: - Downcasting on class inheritance (superclass → subclass) -- Between two types that share a superclass +- بين نوعين يشتركان في فئة superclass ```typescript // downcasting on class inheritance @@ -199,7 +199,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +في هذه الحالة يمكنك إستخدام`changetype` دالة: ```typescript // downcasting on class inheritance @@ -218,7 +218,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. +إذا كنت تريد فقط إزالة nullability ، فيمكنك الاستمرار في استخدام `as` (أو `variable`) ، ولكن تأكد من أنك تعرف أن القيمة لا يمكن أن تكون خالية ، وإلا فإنه سوف ينكسر. ```typescript // remove nullability @@ -231,23 +231,23 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +بالنسبة لحالة ال nullability ، نوصي بإلقاء نظرة على [ مميزة التحقق من nullability ](https://www.assemblyscript.org/basics.html#nullability-checks) ، ستجعل الكود أكثر نظافة 🙂 -Also we've added a few more static methods in some types to ease casting, they are: +أضفنا أيضًا بعض ال static methods في بعض الأنواع وذلك لتسهيل عملية ال Casting ، وهي: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Nullability check with property access +### التحقق من Nullability مع الوصول الى الخاصية -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +لاستخدام [ مميزة التحقق من nullability ](https://www.assemblyscript.org/basics.html#nullability-checks) ، يمكنك استخدام عبارات `if` أو عامل التشغيل الثلاثي (`؟` and `:`) مثل هذا: ```typescript let something: string | null = 'data' -let somethingOrElse = something ? 
something : 'else' +let somethingOrElse = something ؟ something : 'else' // or @@ -260,7 +260,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +ومع ذلك ، فإن هذا لا يعمل إلا عند تنفيذ `if` / ternary على متغير ، وليس على خاصية الوصول ، مثل هذا: ```typescript class Container { @@ -270,15 +270,15 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ؟ container.data : 'else' // doesn't compile ``` -Which outputs this error: +الذي يخرج هذا الخطأ: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. - let somethingOrElse: string = container.data ? container.data : "else"; + let somethingOrElse: string = container.data ؟ container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: @@ -293,10 +293,10 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +let somethingOrElse: string = data ؟ data : 'else' // compiles just fine :) ``` -### Operator overloading with property access +### التحميل الزائد للمشغل مع الوصول للخاصية If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. @@ -322,7 +322,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +لقد فتحنا مشكلة في مترجم AssemblyScript ، ولكن في الوقت الحالي إذا أجريت هذا النوع من العمليات في Subgraph mappings ، فيجب عليك تغييرها لإجراء فحص ل null قبل ذلك. ```typescript let wrapper = new Wrapper(y) @@ -334,9 +334,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### Value initialization +### تهيئة القيمة -If you have any code like this: +إذا كان لديك أي كود مثل هذا: ```typescript var value: Type // null @@ -344,7 +344,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +سيتم تجميعها لكنها ستتوقف في وقت التشغيل ، وهذا يحدث لأن القيمة لم تتم تهيئتها ، لذا تأكد من أن ال subgraph قد قام بتهيئة قيمها ، على النحو التالي: ```typescript var value = new Type() // initialized @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Also if you have nullable properties in a GraphQL entity, like this: +وأيضًا إذا كانت لديك خصائص ل nullable في كيان GraphQL ، مثل هذا: ```graphql type Total @entity { @@ -361,7 +361,7 @@ type Total @entity { } ``` -And you have code similar to this: +ولديك كود مشابه لهذا: ```typescript let total = Total.load('latest') @@ -373,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. 
So you either initialize it first: +ستحتاج إلى التأكد من تهيئة`total.amount`القيمة ، لأنه إذا حاولت الوصول كما في السطر الأخير للمجموع ، فسوف يتعطل. لذلك إما أن تقوم بتهيئته أولاً: ```typescript let total = Total.load('latest') @@ -386,7 +386,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +أو يمكنك فقط تغيير مخطط GraphQL الخاص بك بحيث لا تستخدم نوع nullable لهذه الخاصية ، ثم سنقوم بتهيئته على أنه صفر في الخطوة`codegen`😉 ```graphql type Total @entity { @@ -405,9 +405,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### Class property initialization +### تهيئة خاصية الفئة -If you export any classes with properties that are other classes (declared by you or by the standard library) like this: +إذا قمت بتصدير أي فئات ذات خصائص فئات أخرى (تم تعريفها بواسطتك أو بواسطة المكتبة القياسية) مثل هذا: ```typescript class Thing {} @@ -417,7 +417,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +فإن المترجم سيخطئ لأنك ستحتاج إما إضافة مُهيئ للخصائص التي هي فئات ، أو إضافة عامل التشغيل `!`: ```typescript export class Something { @@ -441,11 +441,11 @@ export class Something { } ``` -### GraphQL schema +### مخطط GraphQL -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +هذا ليس تغيير مباشرا ل AssemblyScript ، ولكن قد تحتاج إلى تحديث ملف `schema.graphql` الخاص بك. -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +الآن لم يعد بإمكانك تعريف الحقول في الأنواع الخاصة بك والتي هي قوائم Non-Nullable. إذا كان لديك مخطط مثل هذا: ```graphql type Something @entity { @@ -458,7 +458,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +سيتعين عليك إضافة `!` لعضو من نوع القائمة ، مثل هذا: ```graphql type Something @entity { @@ -471,14 +471,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +هذا التغير بسبب اختلافات ال nullability بين إصدارات AssemblyScript وهو مرتبط بملف`src/generated/schema.ts` (المسار الافتراضي ، ربما تكون قد غيرت هذا). -### Other +### آخر - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- لم تعد المصفوفة ترث من ArrayBufferView ، لكنها أصبحت متميزة الآن ([ v0.10.0 ](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) - Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- نتيجة العملية الثنائية `**` هي الآن العدد الصحيح للمقام المشترك إذا كان كلا المعاملين عددًا صحيحًا. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- إجبار`NaN` إلى `false` عندما ال casting إلى`bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) - When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) - Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) From 70479237dca7f927e5c7f8626c273cf75b4c7ec4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:09:58 -0500 Subject: [PATCH 149/241] New translations assemblyscript-migration-guide.mdx (Japanese) --- .../assemblyscript-migration-guide.mdx | 130 +++++++++--------- 1 file changed, 65 insertions(+), 65 deletions(-) diff --git a/pages/ja/developer/assemblyscript-migration-guide.mdx b/pages/ja/developer/assemblyscript-migration-guide.mdx index 2db90a608110..951158bf610b 100644 --- a/pages/ja/developer/assemblyscript-migration-guide.mdx +++ b/pages/ja/developer/assemblyscript-migration-guide.mdx @@ -1,18 +1,18 @@ --- -title: AssemblyScript Migration Guide +title: AssemblyScript マイグレーションガイド --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +これまでサブグラフは、[AssemblyScriptの最初のバージョン](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6)を使用していました。 ついに[最新のバージョン](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10)(v0.19.10) のサポートを追加しました! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +これにより、サブグラフの開発者は、AS言語と標準ライブラリの新しい機能を使用できるようになります。 -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +このガイドは、バージョン`0.22.0`以下の`graph-cli`/`graph-ts` をお使いの方に適用されます。 もしあなたがすでにそれ以上のバージョンにいるなら、あなたはすでに AssemblyScript のバージョン`0.19.10` を使っています。 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> 注:`0.24.0`以降、`graph-node`はサブグラフマニフェストで指定された`apiVersion`に応じて、両方のバージョンをサポートしています。 -## Features +## 特徴 -### New functionality +### 新機能 - `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) - New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) @@ -30,21 +30,21 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` - Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) - Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizations +### 最適化 - `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) - Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Other +### その他 - The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -## How to upgrade? +## アップグレードの方法 -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. `subgraph.yaml`のマッピングの`apiVersion`を`0.0.6`に変更してください。 ```yaml ... @@ -56,7 +56,7 @@ dataSources: ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. 使用している`graph-cli`を`最新版`に更新するには、次のように実行します。 ```bash # if you have it globally installed @@ -66,20 +66,20 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. `graph-ts`についても同様ですが、グローバルにインストールするのではなく、メインの依存関係に保存します。 ```bash npm install --save @graphprotocol/graph-ts@latest ``` -4. Follow the rest of the guide to fix the language breaking changes. -5. Run `codegen` and `deploy` again. +4. ガイドの残りの部分に従って、言語の変更を修正します。 +5. `codegen`を実行し、再度`deploy`します。 -## Breaking changes +## 変更点 ### Nullability -On the older version of AssemblyScript, you could create code like this: +古いバージョンのAssemblyScriptでは、以下のようなコードを作ることができました: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -However on the newer version, because the value is nullable, it requires you to check, like this: +しかし、新しいバージョンでは、値がnullableであるため、次のようにチェックする必要があります: ```typescript let maybeValue = load() @@ -98,7 +98,7 @@ if (maybeValue) { } ``` -Or force it like this: +あるいは、次のように強制します: ```typescript let maybeValue = load()! // breaks in runtime if value is null @@ -106,11 +106,11 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. 
If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +どちらを選択すべきか迷った場合は、常に安全なバージョンを使用することをお勧めします。 値が存在しない場合は、サブグラフハンドラの中でreturnを伴う初期のif文を実行するとよいでしょう。 -### Variable Shadowing +### 変数シャドウイング -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +以前は、[変数のシャドウイング](https://en.wikipedia.org/wiki/Variable_shadowing)を行うことができ、次のようなコードが動作していました。 ```typescript let a = 10 @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -However now this isn't possible anymore, and the compiler returns this error: +しかし、現在はこれができなくなり、コンパイラは次のようなエラーを返します。 ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -127,9 +127,9 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` -You'll need to rename your duplicate variables if you had variable shadowing. -### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +変数シャドウイングを行っていた場合は、重複する変数の名前を変更する必要があります。 +### Null比較 +サブグラフのアップグレードを行うと、時々以下のようなエラーが発生することがあります。 ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -137,7 +137,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +解決するには、 `if` 文を以下のように変更するだけです。 ```typescript if (!decimals) { @@ -147,23 +147,23 @@ To solve you can simply change the `if` statement to something like this: if (decimals === null) { ``` -The same applies if you're doing != instead of ==. +これは、==ではなく!=の場合も同様です。 -### Casting +### キャスト -The common way to do casting before was to just use the `as` keyword, like this: +以前の一般的なキャストの方法は、次のように`as`キーワードを使うだけでした。 ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -However this only works in two scenarios: +しかし、これは2つのシナリオでしか機能しません。 -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); -- Upcasting on class inheritance (subclass → superclass) +- プリミティブなキャスト(between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- クラス継承のアップキャスティング(サブクラス→スーパークラス) -Examples: +例 ```typescript // primitive casting @@ -179,10 +179,10 @@ class Bytes extends Uint8Array {} let bytes = new Bytes(2) < Uint8Array > bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +キャストしたくても、`as`/`var`を使うと**安全ではない**というシナリオが2つあります。 -- Downcasting on class inheritance (superclass → subclass) -- Between two types that share a superclass +- クラス継承のダウンキャスト(スーパークラス → サブクラス) +- スーパークラスを共有する2つの型の間 ```typescript // downcasting on class inheritance @@ -199,7 +199,7 @@ class ByteArray extends Uint8Array {} let bytes = new Bytes(2) < ByteArray > bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +このような場合には、`changetype`関数を使用します。 ```typescript // downcasting on class inheritance @@ -218,7 +218,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. 
+単にnull性を除去したいだけなら、`as` オペレーター(or `variable`)を使い続けることができますが、値がnullではないことを確認しておかないと壊れてしまいます。 ```typescript // remove nullability @@ -231,18 +231,18 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +Nullabilityについては、[nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks)を利用することをお勧めします。 -Also we've added a few more static methods in some types to ease casting, they are: +また、キャストを容易にするために、いくつかの型にスタティックメソッドを追加しました。 - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Nullability check with property access +### プロパティアクセスによるNullabilityチェック -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +[nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks)を使用するには、次のように`if`文や三項演算子(`?` and `:`) を使用します。 ```typescript let something: string | null = 'data' @@ -260,7 +260,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +しかし、これは、以下のように、プロパティのアクセスではなく、変数に対して`if`/ternaryを行っている場合にのみ機能します。 ```typescript class Container { @@ -273,7 +273,7 @@ container.data = 'data' let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile ``` -Which outputs this error: +すると、このようなエラーが出力されます。 ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -281,7 +281,7 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` -To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: +この問題を解決するには、そのプロパティアクセスのための変数を作成して、コンパイラがnullability checkのマジックを行うようにします。 ```typescript class Container { @@ -296,9 +296,9 @@ let data = container.data let somethingOrElse: string = data ? data : 'else' // compiles just fine :) ``` -### Operator overloading with property access +### プロパティアクセスによるオペレーターオーバーロード -If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. +アセンブリスクリプトのコンパイラは、値の片方がnullableであることを警告するコンパイル時のエラーを出さずに、ただ黙ってコンパイルするので、実行時にコードが壊れる可能性があります。 ```typescript class BigInt extends Uint8Array { @@ -322,7 +322,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. 
+この件に関して、アセンブリ・スクリプト・コンパイラーに問題を提起しましたが、 今のところ、もしサブグラフ・マッピングでこの種の操作を行う場合には、 その前にNULLチェックを行うように変更してください。 ```typescript let wrapper = new Wrapper(y) @@ -334,9 +334,9 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt ``` -### Value initialization +### 値の初期化 -If you have any code like this: +もし、このようなコードがあった場合: ```typescript var value: Type // null @@ -344,7 +344,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +これは、値が初期化されていないために起こります。したがって、次のようにサブグラフが値を初期化していることを確認してください。 ```typescript var value = new Type() // initialized @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Also if you have nullable properties in a GraphQL entity, like this: +また、以下のようにGraphQLのエンティティにNullableなプロパティがある場合も同様です。 ```graphql type Total @entity { @@ -361,7 +361,7 @@ type Total @entity { } ``` -And you have code similar to this: +そして、以下のようなコードになります: ```typescript let total = Total.load('latest') @@ -373,7 +373,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +`total.amount`の値を確実に初期化する必要があります。なぜなら、最後の行のsumのようにアクセスしようとすると、クラッシュしてしまうからです。 そのため、最初に初期化する必要があります。 ```typescript let total = Total.load('latest') @@ -386,7 +386,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +あるいは、このプロパティに nullable 型を使用しないように GraphQL スキーマを変更することもできます。そうすれば、`コード生成`の段階でゼロとして初期化されます。 ```graphql type Total @entity { @@ -405,9 +405,9 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -### Class property initialization +### クラスのプロパティの初期化 -If you export any classes with properties that are other classes (declared by you or by the standard library) like this: +以下のように、他のクラス(自分で宣言したものや標準ライブラリで宣言したもの)のプロパティを持つクラスをエクスポートした場合、そのクラスのプロパティを初期化します: ```typescript class Thing {} @@ -417,7 +417,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +コンパイラがエラーになるのは、クラスであるプロパティにイニシャライザを追加するか、`!` オペレーターを追加する必要があるからです。 ```typescript export class Something { @@ -441,11 +441,11 @@ export class Something { } ``` -### GraphQL schema +### GraphQLスキーマ -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +これはAssemblyScriptの直接的な変更ではありませんが、`schema.graphql`ファイルを更新する必要があるかもしれません。 -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +タイプの中にNon-Nullable Listのフィールドを定義することができなくなりました。 次のようなスキーマを持っているとします。 ```graphql type Something @entity { @@ -458,7 +458,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Listタイプのメンバーには、以下のように`!` を付ける必要があります。 ```graphql type Something @entity { @@ -471,9 +471,9 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). 
+これはAssemblyScriptのバージョンによるnullabilityの違いから変更されたもので、`src/generated/schema.ts`ファイル(デフォルトのパス、あなたはこれを変更したかもしれません)に関連しています。 -### Other +### その他 - Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) From af8d1d9a423ca9d24b568f13db9a8381d02c93c9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:01 -0500 Subject: [PATCH 150/241] New translations distributed-systems.mdx (Arabic) --- pages/ar/developer/distributed-systems.mdx | 50 +++++++++++----------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/pages/ar/developer/distributed-systems.mdx b/pages/ar/developer/distributed-systems.mdx index 894fcbe2e18b..aa4b1b4174b7 100644 --- a/pages/ar/developer/distributed-systems.mdx +++ b/pages/ar/developer/distributed-systems.mdx @@ -1,37 +1,37 @@ --- -title: Distributed Systems +title: الانظمة الموزعة --- -The Graph is a protocol implemented as a distributed system. +The Graph هو بروتوكول يتم تنفيذه كنظام موزع. -Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. +فشل الاتصالات. وصول الطلبات خارج الترتيب. أجهزة الكمبيوتر المختلفة ذات الساعات والحالات غير المتزامنة تعالج الطلبات ذات الصلة. الخوادم تعيد التشغيل. حدوث عمليات Re-orgs بين الطلبات. هذه المشاكل متأصلة في جميع الأنظمة الموزعة ولكنها تتفاقم في الأنظمة التي تعمل على نطاق عالمي. -Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. +ضع في اعتبارك هذا المثال لما قد يحدث إذا قام العميل بـ polls للمفهرس للحصول على أحدث البيانات أثناء re-org. -1. Indexer ingests block 8 -2. Request served to the client for block 8 -3. Indexer ingests block 9 -4. Indexer ingests block 10A -5. Request served to the client for block 10A -6. Indexer detects reorg to 10B and rolls back 10A -7. Request served to the client for block 9 -8. Indexer ingests block 10B -9. Indexer ingests block 11 -10. Request served to the client for block 11 +1. المفهرس يستوعب الكتلة 8 +2. تقديم الطلب للعميل للمجموعة 8 +3. يستوعب المفهرس الكتلة 9 +4. المفهرس يستوعب الكتلة 10A +5. تقديم الطلب للعميل للكتلة 10A +6. يكتشف المفهرس reorg لـ 10B ويسترجع 10A +7. تقديم الطلب للعميل للكتلة 9 +8. المفهرس يستوعب الكتلة 10B +9. المفهرس يستوعب الكتلة 11 +10. تقديم الطلب للعميل للكتلة 11 -From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. +من وجهة نظر المفهرس ، تسير الأمور إلى الأمام بشكل منطقي. الوقت يمضي قدما ، على الرغم من أننا اضطررنا إلى التراجع عن كتلة الـ uncle وتشغيل الكتلة وفقا للاتفاق. على طول الطريق ، يقدم المفهرس الطلبات باستخدام أحدث حالة يعرفها في ذلك الوقت. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. 
The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +لكن من وجهة نظر العميل ، تبدو الأمور مشوشة. يلاحظ العميل أن الردود كانت للكتل 8 و 10 و 9 و 11 بهذا الترتيب. نسمي هذا مشكلة "تذبذب الكتلة". عندما يواجه العميل تذبذبا في الكتلة ، فقد تظهر البيانات متناقضة مع نفسها بمرور الوقت. يزداد الموقف سوءا عندما نعتبر أن المفهرسين لا يستوعبون جميع الكتل الأخيرة في وقت واحد ، وقد يتم توجيه طلباتك إلى عدة مفهرسين. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +تقع على عاتق العميل والخادم مسؤولية العمل معا لتوفير بيانات متسقة للمستخدم. يجب استخدام طرق مختلفة اعتمادا على الاتساق المطلوب حيث لا يوجد برنامج واحد مناسب لكل مشكلة. -Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. +الاستنتاج من خلال الآثار المترتبة على الأنظمة الموزعة أمر صعب ، لكن الإصلاح قد لا يكون كذلك! لقد أنشأنا APIs وأنماط لمساعدتك على تصفح بعض حالات الاستخدام الشائعة. توضح الأمثلة التالية هذه الأنماط ولكنها لا تزال تتجاهل التفاصيل التي يتطلبها كود الإنتاج (مثل معالجة الأخطاء والإلغاء) حتى لا يتم تشويش الأفكار الرئيسية. -## Polling for updated data +## Polling للبيانات المحدثة -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph يوفر `block: { number_gte: $minBlock }` API ، والتي تضمن أن تكون الاستجابة لكتلة واحدة تزيد أو تساوي `$minBlock`. إذا تم إجراء الطلب لـ `graph-node` instance ولم تتم مزامنة الكتلة الدنيا بعد ، فسيرجع `graph-node` بخطأ. إذا قام `graph-node` بمزامنة الكتلة الدنيا ، فسيتم تشغيل الاستجابة لأحدث كتلة. إذا تم تقديم الطلب إلى Edge & Node Gateway ، ستقوم الـ Gateway بفلترة المفهرسين الذين لم يقوموا بعد بمزامنة الكتلة الدنيا وتجعل الطلب لأحدث كتلة قام المفهرس بمزامنتها. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +يمكننا استخدام `number_gte` لضمان عدم عودة الوقت إلى الوراء عند عمل polling للبيانات في الحلقة. هنا مثال لذلك: ```javascript /// Updates the protocol.paused variable to the latest @@ -73,11 +73,11 @@ async function updateProtocolPaused() { } ``` -## Fetching a set of related items +## جلب مجموعة من العناصر المرتبطة -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +حالة أخرى هي جلب مجموعة كبيرة أو بشكل عام جلب العناصر المرتبطة عبر طلبات متعددة. على عكس حالة الـ polling (حيث كان التناسق المطلوب هو المضي قدما في الزمن) ، فإن الاتساق المطلوب هو لنقطة واحدة في الزمن. 
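One way to picture that requirement: every request in the related set carries the same block constraint, so all of them read from a single snapshot of the chain. A minimal sketch in plain GraphQL — the `domains` entity matches the example below, while the hash value is purely illustrative:

```graphql
# Each related request would reuse this same (illustrative) block hash,
# so every page of results comes from one consistent block.
{
  domains(first: 10, block: { hash: "0xILLUSTRATIVE_BLOCK_HASH" }) {
    name
  }
}
```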
-Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +هنا سوف نستخدم الوسيطة `block: { hash: $blockHash }` لتثبيت جميع نتائجنا في نفس الكتلة. ```javascript /// Gets a list of domain names from a single block using pagination @@ -129,4 +129,4 @@ async function getDomainNames() { } ``` -Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. +لاحظ أنه في حالة re-org ، سيحتاج العميل إلى إعادة المحاولة من الطلب الأول لتحديث hash الكتلة إلى كتلة non-uncle. From b9f78b72fc67df216e439c294d8fdcde52b3b901 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:03 -0500 Subject: [PATCH 151/241] New translations querying-from-your-app.mdx (Spanish) --- pages/es/developer/querying-from-your-app.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/es/developer/querying-from-your-app.mdx b/pages/es/developer/querying-from-your-app.mdx index c09c44efee72..fb8c7895afaa 100644 --- a/pages/es/developer/querying-from-your-app.mdx +++ b/pages/es/developer/querying-from-your-app.mdx @@ -1,10 +1,10 @@ --- -title: Querying from an Application +title: Consultar desde una Aplicacion --- -Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: +Una vez que un subgrafo es desplegado en Subgraph Studio o en The Graph Explorer, se te dará el endpoint para tu API GraphQL que debería ser algo así: -**Subgraph Studio (testing endpoint)** +**Subgraph Studio (endpoint de prueba)** ```sh Queries (HTTP) @@ -18,23 +18,23 @@ Queries (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. +Usando el endpoint de GraphQL, puedes usar varias librerías de Clientes de GraphQL para consultar el subgrafo y rellenar tu aplicación con los datos indexados por el subgrafo. -Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: +A continuación se presentan un par de clientes GraphQL más populares en el ecosistema y cómo utilizarlos: -### Apollo client +### Cliente Apollo -[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. +[Apollo client](https://www.apollographql.com/docs/) admite proyectos web que incluyen frameworks como React y Vue, así como clientes móviles como iOS, Android y React Native. -Let's look at how fetch data from a subgraph with Apollo client in a web project. +Veamos cómo obtener datos de un subgrafo con el cliente Apollo en un proyecto web. 
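Independent of any client library, the endpoints above speak ordinary GraphQL over HTTP, so a quick smoke test needs nothing more than `curl`. In this sketch the query URL is a stand-in for the one shown on your subgraph's page, and the `tokens` entity and its `id` field are placeholders borrowed from the examples below:

```sh
# SUBGRAPH_QUERY_URL is a placeholder for your subgraph's query URL
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{ "query": "{ tokens(first: 5) { id } }" }' \
  "$SUBGRAPH_QUERY_URL"
```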
-First, install `@apollo/client` and `graphql`: +Primero, instala `@apollo/client` y `graphql`: ```sh npm install @apollo/client graphql ``` -Then you can query the API with the following code: +A continuación, puedes consultar la API con el siguiente código: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -To use variables, you can pass in a `variables` argument to the query: +Para utilizar variables, puedes pasar un argumento `variables` a la consulta: ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. +Otra opción es [URQL](https://formidable.com/open-source/urql/), una libreria cliente de GraphQL algo más ligera. -Let's look at how fetch data from a subgraph with URQL in a web project. +Veamos cómo obtener datos de un subgrafo con URQL en un proyecto web. -First, install `urql` and `graphql`: +Primero, instala `urql` and `graphql`: ```sh npm install urql graphql ``` -Then you can query the API with the following code: +A continuación, puedes consultar la API con el siguiente código: ```javascript import { createClient } from 'urql' From 6f12ab4801f018e901adb7fb65e5368b494f5470 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:04 -0500 Subject: [PATCH 152/241] New translations querying-from-your-app.mdx (Arabic) --- pages/ar/developer/querying-from-your-app.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/ar/developer/querying-from-your-app.mdx b/pages/ar/developer/querying-from-your-app.mdx index c09c44efee72..f3decc0d1768 100644 --- a/pages/ar/developer/querying-from-your-app.mdx +++ b/pages/ar/developer/querying-from-your-app.mdx @@ -1,40 +1,40 @@ --- -title: Querying from an Application +title: الاستعلام من التطبيق --- -Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: +بمجرد نشر ال Subgraph في Subgraph Studio أو في Graph Explorer ، سيتم إعطاؤك endpoint ل GraphQL API الخاصة بك والتي يجب أن تبدو كما يلي: -**Subgraph Studio (testing endpoint)** +**Subgraph Studio (اختبار endpoint)** ```sh -Queries (HTTP) +استعلامات (HTTP) https://api.studio.thegraph.com/query/// ``` **Graph Explorer** ```sh -Queries (HTTP) +استعلامات (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. +باستخدام GraphQL endpoint ، يمكنك استخدام العديد من مكتبات GraphQL Client للاستعلام عن ال Subgraph وملء تطبيقك بالبيانات المفهرسة بواسطة ال Subgraph. Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: ### Apollo client -[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. +[Apoolo client ](https://www.apollographql.com/docs/)يدعم مشاريع الويب بما في ال framework مثل React و Vue ، بالإضافة إلى mobile clients مثل iOS و Android و React Native. -Let's look at how fetch data from a subgraph with Apollo client in a web project. +لنلقِ نظرة على كيفية جلب البيانات من Subgraph وذلك باستخدام Apollo client في مشروع ويب. 
-First, install `@apollo/client` and `graphql`: +اولا قم بتثبيت `@apollo/client` and `graphql`: ```sh npm install @apollo/client graphql ``` -Then you can query the API with the following code: +بعد ذلك يمكنك الاستعلام عن API بالكود التالي: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -To use variables, you can pass in a `variables` argument to the query: +لاستخدام المتغيرات، يمكنك التمرير في`variables`ل argument الاستعلام: ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. +هناك خيار آخر وهو [ URQL ](https://formidable.com/open-source/urql/) ، وهي مكتبة GraphQL client أخف وزنا إلى حد ما. -Let's look at how fetch data from a subgraph with URQL in a web project. +لنلقِ نظرة على كيفية جلب البيانات من Subgraph باستخدام URQL في مشروع ويب. -First, install `urql` and `graphql`: +اولا قم بتثبيت `urql` و `graphql`: ```sh npm install urql graphql ``` -Then you can query the API with the following code: +بعد ذلك يمكنك الاستعلام عن API بالكود التالي: ```javascript import { createClient } from 'urql' From e04a1b82a71a7f6e7a9e7c7c36a62298c2cbab64 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:06 -0500 Subject: [PATCH 153/241] New translations querying-from-your-app.mdx (Japanese) --- pages/ja/developer/querying-from-your-app.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/ja/developer/querying-from-your-app.mdx b/pages/ja/developer/querying-from-your-app.mdx index 9038d2ee3790..e94a6f50046e 100644 --- a/pages/ja/developer/querying-from-your-app.mdx +++ b/pages/ja/developer/querying-from-your-app.mdx @@ -1,10 +1,10 @@ --- -title: Querying from an Application +title: アプリケーションからのクエリ --- -Once a subgraph is deployed to the Subgraph Studio or to the Graph Explorer, you will be given the endpoint for your GraphQL API that should look something like this: +サブグラフがSubgraph StudioまたはGraph Explorerにデプロイされると、GraphQL APIのエンドポイントが与えられ、以下のような形になります。 -**Subgraph Studio (testing endpoint)** +**Subgraph Studio (テスト用エンドポイント)** ```sh Queries (HTTP) @@ -18,23 +18,23 @@ Queries (HTTP) https://gateway.thegraph.com/api//subgraphs/id/ ``` -Using the GraphQL endpoint, you can use various GraphQL Client libraries to query the subgraph and populate your app with the data indexed by the subgraph. +GraphQLエンドポイントを使用すると、さまざまなGraphQLクライアントライブラリを使用してサブグラフをクエリし、サブグラフによってインデックス化されたデータをアプリに入力することができます。 -Here are a couple of the more popular GraphQL clients in the ecosystem and how to use them: +ここでは、エコシステムで人気のあるGraphQLクライアントをいくつか紹介し、その使い方を説明します: -### Apollo client +### Apolloクライアント -[Apollo client](https://www.apollographql.com/docs/) supports web projects including frameworks like React and Vue, as well as mobile clients like iOS, Android, and React Native. +[Apolloクライアント](https://www.apollographql.com/docs/)は、ReactやVueなどのフレームワークを含むWebプロジェクトや、iOS、Android、React Nativeなどのモバイルクライアントをサポートしています。 -Let's look at how fetch data from a subgraph with Apollo client in a web project. 
+WebプロジェクトでApolloクライアントを使ってサブグラフからデータを取得する方法を見てみましょう。 -First, install `@apollo/client` and `graphql`: +まず、`@apollo/client`と`graphql`をインストールします: ```sh npm install @apollo/client graphql ``` -Then you can query the API with the following code: +その後、以下のコードでAPIをクエリできます: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -67,7 +67,7 @@ client }) ``` -To use variables, you can pass in a `variables` argument to the query: +変数を使うには、クエリの引数に`variables` を渡します。 ```javascript const tokensQuery = ` @@ -100,17 +100,17 @@ client ### URQL -Another option is [URQL](https://formidable.com/open-source/urql/), a somewhat lighter weight GraphQL client library. +もう一つの選択肢は[URQL](https://formidable.com/open-source/urql/)で、URQLは、やや軽量なGraphQLクライアントライブラリです。 -Let's look at how fetch data from a subgraph with URQL in a web project. +URQLは、やや軽量なGraphQLクライアントライブラリです。 -First, install `urql` and `graphql`: +WebプロジェクトでURQLを使ってサブグラフからデータを取得する方法を見てみましょう。 まず、`urql`と`graphql`をインストールします。 ```sh npm install urql graphql ``` -Then you can query the API with the following code: +その後、以下のコードでAPIをクエリできます: ```javascript import { createClient } from 'urql' From ef3d31a27461d4e08ee0d44064efca3bbd31c6a5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:08 -0500 Subject: [PATCH 154/241] New translations quick-start.mdx (Spanish) --- pages/es/developer/quick-start.mdx | 98 +++++++++++++++--------------- 1 file changed, 49 insertions(+), 49 deletions(-) diff --git a/pages/es/developer/quick-start.mdx b/pages/es/developer/quick-start.mdx index 6893d424ddc2..a75c04fadbd1 100644 --- a/pages/es/developer/quick-start.mdx +++ b/pages/es/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: Quick Start +title: Comienzo Rapido --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph on: +Esta guía te llevará rápidamente a través de cómo inicializar, crear y desplegar tu subgrafo en: -- **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** -- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) +- **Subgraph Studio** - usado solo para subgrafos que indexan en **Ethereum mainnet** +- **Hosted Service** - usado para subgrafos que indexan **otras redes** fuera de Ethereum mainnet (e.g. Binance, Matic, etc) ## Subgraph Studio -### 1. Install the Graph CLI +### 1. Instala The Graph CLI -The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. +The Graph CLI esta escrito en JavaScript y necesitaras tener `npm` o `yarn` instalado para usarlo. ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. Inicializa tu Subgrafo -- Initialize your subgraph from an existing contract. +- Inicializa tu subgrafo a partir de un contrato existente. ```sh graph init --studio ``` -- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. +- El slug de tu subgrafo es un identificador para tu subgrafo. La herramienta CLI te guiará a través de los pasos para crear un subgrafo, como la address del contrato, la red, etc., como puedes ver en la captura de pantalla siguiente. 
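For example, with a purely hypothetical slug the command from the previous step would look like this (substitute the slug you registered in Subgraph Studio):

```sh
graph init --studio my-example-subgraph
```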
-![Subgraph command](/img/Subgraph-Slug.png) +![Comando de Subgrafo](/img/Subgraph-Slug.png) -### 3. Write your Subgraph +### 3. Escribe tu Subgrafo -The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +Los comandos anteriores crean un subgrafo de andamio que puedes utilizar como punto de partida para construir tu subgrafo. Al realizar cambios en el subgrafo, trabajarás principalmente con tres archivos: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. +- Manifest (subgraph.yaml) - El manifiesto define qué fuentes de datos indexarán tus subgrafos. +- Schema (schema.graphql) - El esquema GraphQL define los datos que deseas recuperar del subgrafo. +- AssemblyScript Mappings (mapping.ts) - Este es el código que traduce los datos de tus fuentes de datos a las entidades definidas en el esquema. -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +Para más información sobre cómo escribir tu subgrafo, mira [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Deploy to the Subgraph Studio +### 4. Despliega en Subgraph Studio -- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. -- Click "Create" and enter the subgraph slug you used in step 2. -- Run these commands in the subgraph folder +- Ve a Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) y conecta tu wallet. +- Haz clic en "Crear" e introduce el subgrafo que utilizaste en el paso 2. +- Ejecuta estos comandos en la carpeta subgrafo ```sh $ graph codegen $ graph build ``` -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +- Autentica y despliega tu subgrafo. La clave para desplegar se puede encontrar en la página de Subgraph en Subgraph Studio. ```sh $ graph auth --studio $ graph deploy --studio ``` -- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` +- Se te pedirá una etiqueta de versión. Se recomienda encarecidamente utilizar las siguientes convenciones para nombrar tus versiones. Ejemplo: `0.0.1`, `v1`, `version1` -### 5. Check your logs +### 5. Comprueba tus registros -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +Los registros deberían indicarte si hay algún error. Si tu subgrafo está fallando, puedes consultar la fortaleza del subgrafo utilizando la función [GraphiQL Playground](https://graphiql-online.com/). Usa [este endpoint](https://api.thegraph.com/index-node/graphql). 
Ten en cuenta que puedes aprovechar la consulta de abajo e introducir tu ID de despliegue para tu subgrafo. En este caso, `Qm...` es el ID de despliegue (que puede ser obtenido en la pagina de the Subgraph debado de **Details**). La siguiente consulta te dirá cuándo falla un subgrafo para que puedas depurar en consecuencia: ```sh { @@ -109,15 +109,15 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. Consulta tu Subgrafo -You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). +Ahora puedes consultar tu subgrafo siguiendo [estas instrucciones](/developer/query-the-graph). Puedes consultar desde tu dapp si no tienes tu clave de API a través de la URL de consulta temporal, libre y de tarifa limitada, que puede utilizarse para el desarrollo y la puesta en marcha. Puedes leer las instrucciones adicionales sobre cómo consultar un subgrafo desde una aplicación frontend [aquí](/developer/querying-from-your-app). -## Hosted Service +## Servicio Alojado -### 1. Install the Graph CLI +### 1. Instala The Graph CLI -"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. +"The Graph CLI es un paquete npm y necesitarás `npm` o `yarn` instalado para usarlo. ```sh # NPM @@ -127,39 +127,39 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. Inicializa tu Subgrafo -- Initialize your subgraph from an existing contract. +- Inicializa tu subgrafo a partir de un contrato existente. ```sh $ graph init --product hosted-service --from-contract
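# Illustrative invocation only — the contract address and subgraph name below
# are placeholders; substitute the values for your own project:
# $ graph init --product hosted-service --from-contract 0x0123456789abcdef0123456789abcdef01234567 graphprotocol/examplesubgraph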
``` -- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` +- Se te pedirá un nombre de subgrafo. El formato es `/`. Ex: `graphprotocol/examplesubgraph` -- If you'd like to initialize from an example, run the command below: +- Si quieres inicializar desde un ejemplo, ejecuta el siguiente comando: ```sh $ graph init --product hosted-service --from-example ``` -- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- En el caso del ejemplo, el subgrafo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. -### 3. Write your Subgraph +### 3. Escribe tu Subgrafo -The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: +El comando anterior habrá creado un andamio a partir del cual puedes construir tu subgrafo. Al realizar cambios en el subgrafo, trabajarás principalmente con tres archivos: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index -- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema +- Manifest (subgraph.yaml) - El manifiesto define qué fuentes de datos indexará tu subgrafo +- Schema (schema.graphql) - El esquema GraphQL define los datos que deseas recuperar del subgrafo +- AssemblyScript Mappings (mapping.ts) - Este es el código que traduce los datos de tus fuentes de datos a las entidades definidas en el esquema -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +Para más información sobre cómo escribir tu subgrafo, mira [Create a Subgraph](/developer/create-subgraph-hosted). -### 4. Deploy your Subgraph +### 4. Despliega tu Subgrafo -- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account -- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. -- Run codegen in the subgraph folder +- Firma en el [Hosted Service](https://thegraph.com/hosted-service/) usando tu cuenta github +- Haz clic en Add Subgraph y rellena la información requerida. Utiliza el mismo nombre de subgrafo que en el paso 2. +- Ejecuta codegen en la carpeta del subgrafo ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. +- Agrega tu token de acceso y despliega tu subgrafo. El token de acceso se encuentra en tu panel de control en el Servicio Alojado (Hosted Service). ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. Check your logs +### 5. Comprueba tus registros -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. 
In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +Los registros deberían indicarte si hay algún error. Si tu subgrafo está fallando, puedes consultar la fortaleza del subgrafo utilizando la función [GraphiQL Playground](https://graphiql-online.com/). Usa [este endpoint](https://api.thegraph.com/index-node/graphql). Ten en cuenta que puedes aprovechar la consulta de abajo e introducir tu ID de despliegue para tu subgrafo. En este caso, `Qm...` es el ID de despliegue (que puede ser obtenido en la pagina de the Subgraph debado de **Details**). La siguiente consulta te dirá cuándo falla un subgrafo para que puedas depurar en consecuencia: ```sh { @@ -222,6 +222,6 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. Consulta tu Subgrafo -Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. +Sigue [estas instrucciones](/hosted-service/query-hosted-service) para consultar tu subgrafo en el Servicio Alojado (Hosted Service). From 7b999bf6766b03293661a553c47faecb2a2ac709 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:09 -0500 Subject: [PATCH 155/241] New translations quick-start.mdx (Arabic) --- pages/ar/developer/quick-start.mdx | 88 +++++++++++++++--------------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/pages/ar/developer/quick-start.mdx b/pages/ar/developer/quick-start.mdx index d66ecb5b38b6..5a245d65141a 100644 --- a/pages/ar/developer/quick-start.mdx +++ b/pages/ar/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: Quick Start +title: بداية سريعة --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph on: +سيأخذك هذا الدليل سريعا ويعلمك كيفية تهيئة وإنشاء ونشر Subgraph الخاص بك على: - **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** -- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) +- **Hosted Service** - يتم استخدامها ل Subgraphs التي تفهرس ** الشبكات الأخرى ** خارج Ethereum mainnet (مثل Binance و Matic والخ..) ## Subgraph Studio -### 1. Install the Graph CLI +### 1. قم بتثبيت Graph CLI -The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. +تمت كتابة Graph CLI بلغة JavaScript وستحتاج إلى تثبيت إما `npm` أو `yarn` لاستخدامه. ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. قم بتهيئة Subgraph الخاص بك -- Initialize your subgraph from an existing contract. +- ابدأ ال Subgraph الخاص بك من عقد موجود. ```sh graph init --studio ``` -- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. +- مؤشر ال Subgraph الخاص بك هو معرف ل Subgraph الخاص بك. ستوجهك أداة CLI لخطوات إنشاء Subgraph ، مثل عنوان العقد والشبكة الخ.. كما ترى في لقطة الشاشة أدناه. -![Subgraph command](/img/Subgraph-Slug.png) +![أمر Subgraph](/img/Subgraph-Slug.png) -### 3. Write your Subgraph +### 3. اكتب subgraph الخاص بك -The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. 
When making changes to the subgraph, you will mainly work with three files: +تقوم الأوامر السابقة بإنشاء ركيزة ال Subgraph والتي يمكنك استخدامها كنقطة بداية لبناء subgraph الخاص بك. عند إجراء تغييرات على ال subgraph ، ستعمل بشكل أساسي على ثلاثة ملفات: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. +- : (Manifest(subgraph.yaml يحدد ال manifest مصادر البيانات التي سيقوم Subgraphs الخاص بك بفهرستها. +- مخطط (schema.graphql) - يحدد مخطط GraphQL البيانات التي ترغب في استردادها من Subgraph. +- (AssemblyScript Mappings (mapping.ts هذا هو الكود الذي يترجم البيانات من مصادر البيانات الخاصة بك إلى الكيانات المحددة في المخطط. -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +لمزيد من المعلومات حول كيفية كتابة Subgraph ، راجع [ إنشاء Subgraph ](/developer/create-subgraph-hosted). ### 4. Deploy to the Subgraph Studio -- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. +- انتقل إلى Subgraph Studio [ https://thegraph.com/studio/ ](https://thegraph.com/studio/) وقم بتوصيل محفظتك. - Click "Create" and enter the subgraph slug you used in step 2. -- Run these commands in the subgraph folder +- قم بتشغيل هذه الأوامر في مجلد Subgraph ```sh $ graph codegen $ graph build ``` -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +- وثق وأنشر ال Subgraph الخاص بك. يمكن العثور على مفتاح النشر في صفحة Subgraph في Subgraph Studio. ```sh $ graph auth --studio $ graph deploy --studio ``` -- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` +- سيتم طلب منك تسمية الإصدار. يوصى بشدة باستخدام المصطلحات التالية لتسمية الإصدارات الخاصة بك. مثال: `0.0.1` ، `v1` ، `version1` -### 5. Check your logs +### 5. تحقق من السجلات الخاصة بك -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +السجلات ستخبرك في حالة وجود أخطاء. في حالة فشل Subgraph ، يمكنك الاستعلام عن صحة Subgraph وذلك باستخدام [ GraphiQL Playground ](https://graphiql-online.com/). استخدم [ لهذا ال endpoint ](https://api.thegraph.com/index-node/graphql). لاحظ أنه يمكنك الاستفادة من الاستعلام أدناه وإدخال ID النشر ل Subgraph الخاص بك. في هذه الحالة ، `Qm...` هو ID النشر (والذي يمكن أن يوجد في صفحة ال Subgraph ضمن ** التفاصيل **). سيخبرك الاستعلام أدناه عند فشل Subgraph حتى تتمكن من تصحيح الأخطاء وفقًا لذلك: ```sh { @@ -109,15 +109,15 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. الاستعلام عن ال Subgraph الخاص بك -You can now query your subgraph by following [these instructions](/developer/query-the-graph). 
You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). +يمكنك الآن الاستعلام عن Subgraph باتباع [ هذه الإرشادات ](/developer/query-the-graph). يمكنك الاستعلام من ال dapp الخاص بك إذا لم يكن لديك API Key الخاص بك وذلك عبر عنوان URL الخاص بالاستعلام المؤقت المجاني والمحدود والذي يمكن استخدامه للتطوير والتشغيل. يمكنك قراءة الإرشادات الإضافية حول كيفية الاستعلام عن رسم بياني فرعي من [ هنا ](/developer/querying-from-your-app). ## الخدمة المستضافة -### 1. Install the Graph CLI +### 1. قم بتثبيت Graph CLI -"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. +"Graph CLI عبارة عن حزمة npm وستحتاج إلى تثبيت `npm` أو `yarn` لاستخدامها. ```sh # NPM @@ -127,15 +127,15 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. قم بتهيئة Subgraph الخاص بك -- Initialize your subgraph from an existing contract. +- ابدأ ال Subgraph الخاص بك من عقد موجود. ```sh $ graph init --product hosted-service --from-contract
``` -- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` +- سيُطلب منك اسم Subgraph. التنسيق هو `/`. مثال: `graphprotocol/examplesubgraph` - If you'd like to initialize from an example, run the command below: @@ -143,23 +143,23 @@ $ graph init --product hosted-service --from-contract
$ graph init --product hosted-service --from-example ``` -- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- في حالة المثال ، يعتمد Subgraph على عقد Gravity بواسطة Dani Grant الذي يدير ال avatars للمستخدم ويصدر أحداث `NewGravatar` أو `UpdateGravatar` كلما تم إنشاء ال avatars أو تحديثها. -### 3. Write your Subgraph +### 3. اكتب subgraph الخاص بك -The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: +سيكون الأمر السابق قد أنشأ ركيزة حيث يمكنك Subgraph الخاص بك. عند إجراء تغييرات على ال subgraph ، ستعمل بشكل أساسي على ثلاثة ملفات: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index -- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema +- : (Manifest(subgraph.yaml يحدد ال manifest مصادر البيانات التي سيفهرسها ال Subgraph +- مخطط (schema.graphql) - يحدد مخطط GraphQL البيانات التي ترغب في جلبها من Subgraph +- (AssemblyScript Mappings (mapping.ts هذا هو الكود الذي يترجم البيانات من مصادر البيانات الخاصة بك إلى الكيانات المحددة في المخطط -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +لمزيد من المعلومات حول كيفية كتابة Subgraph ، راجع [ إنشاء Subgraph ](/developer/create-subgraph-hosted). -### 4. Deploy your Subgraph +### 4. انشر ال subgraph الخاص بك -- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account -- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. -- Run codegen in the subgraph folder +- سجّل الدخول إلى [ الخدمة المستضافة ](https://thegraph.com/hosted-service/) باستخدام حسابك على github +- انقر فوق إضافة Subgraph واملأ المعلومات المطلوبة. استخدم نفس اسم ال Subgraph كما في الخطوة 2. +- قم بتشغيل codegen في مجلد ال Subgraph ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. +- أضف توكن الوصول الخاص بك وانشر ال Subgraph الخاص بك. يتم العثور على توكن الوصول في لوحة التحكم في ال Hosted service. ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. Check your logs +### 5. تحقق من السجلات الخاصة بك -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +السجلات ستخبرك في حالة وجود أخطاء. في حالة فشل Subgraph ، يمكنك الاستعلام عن صحة Subgraph وذلك باستخدام [ GraphiQL Playground ](https://graphiql-online.com/). استخدم [ لهذا ال endpoint ](https://api.thegraph.com/index-node/graphql). لاحظ أنه يمكنك الاستفادة من الاستعلام أدناه وإدخال ID النشر ل Subgraph الخاص بك. 
في هذه الحالة ، `Qm...` هو ID النشر (والذي يمكن أن يوجد في صفحة ال Subgraph ضمن ** التفاصيل **). سيخبرك الاستعلام أدناه عند فشل Subgraph حتى تتمكن من تصحيح الأخطاء وفقًا لذلك: ```sh { @@ -222,6 +222,6 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. الاستعلام عن ال Subgraph الخاص بك -Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. +اتبع [ هذه الإرشادات ](/hosted-service/query-hosted-service) للاستعلام عن ال Subgraph الخاص بك على ال Hosted service. From 935587b3f24e2696118d95b287979802c259c4a8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:10 -0500 Subject: [PATCH 156/241] New translations quick-start.mdx (Japanese) --- pages/ja/developer/quick-start.mdx | 96 +++++++++++++++--------------- 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/pages/ja/developer/quick-start.mdx b/pages/ja/developer/quick-start.mdx index 6893d424ddc2..023f229a1f39 100644 --- a/pages/ja/developer/quick-start.mdx +++ b/pages/ja/developer/quick-start.mdx @@ -1,17 +1,17 @@ --- -title: Quick Start +title: クイックスタート --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph on: +このガイドでは、サブグラフの初期化、作成、デプロイの方法を素早く説明します: -- **Subgraph Studio** - used only for subgraphs that index **Ethereum mainnet** -- **Hosted Service** - used for subgraphs that index **other networks** outside of Ethereum mainnnet (e.g. Binance, Matic, etc) +- **Subgraph Studio** - **Ethereum mainnet**をインデックスするサブグラフにのみ使用されます。 +- **Hosted Service** - Ethereumメインネット以外の **他のネットワーク**(Binance、Maticなど)にインデックスを付けるサブグラフに使用されます。 ## Subgraph Studio -### 1. Install the Graph CLI +### 1. Graph CLIのインストール -The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. +Graph CLIはJavaScriptで書かれており、使用するには `npm` または `yarn` のいずれかをインストールする必要があります。 ```sh # NPM @@ -21,51 +21,51 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. サブグラフの初期化 -- Initialize your subgraph from an existing contract. +- 既存のコントラクトからサブグラフを初期化します。 ```sh graph init --studio ``` -- Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, such as contract address, network, etc as you can see in the screenshot below. +- サブグラフのスラッグは、サブグラフの識別子です。 CLIツールでは、以下のスクリーンショットに見られるように、コントラクトアドレス、ネットワークなど、サブグラフを作成するための手順を説明します。 ![Subgraph command](/img/Subgraph-Slug.png) -### 3. Write your Subgraph +### 3. サブグラフの作成 -The previous commands creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +前述のコマンドでは、サブグラフを作成するための出発点として使用できるscaffoldサブグラフを作成します。 サブグラフに変更を加える際には、主に3つのファイルを使用します: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. 
+- マニフェスト (subgraph.yaml) - マニフェストは、サブグラフがインデックスするデータソースを定義します。 +- スキーマ (schema.graphql) - GraphQLスキーマは、サブグラフから取得したいデータを定義します。 +- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +サブグラフの書き方の詳細については、 [Create a Subgraph](/developer/create-subgraph-hosted) を参照してください。 -### 4. Deploy to the Subgraph Studio +### 4. Subgraph Studioへのデプロイ -- Go to the Subgraph Studio [https://thegraph.com/studio/](https://thegraph.com/studio/) and connect your wallet. -- Click "Create" and enter the subgraph slug you used in step 2. -- Run these commands in the subgraph folder +- [https://thegraph.com/studio/](https://thegraph.com/studio/) にアクセスし、ウォレットを接続します。 +- 「Create」をクリックし、ステップ2で使用したサブグラフのスラッグを入力します。 +- サブグラフのフォルダで以下のコマンドを実行します。 ```sh $ graph codegen $ graph build ``` -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +- サブグラフの認証とデプロイを行います。 デプロイキーは、Subgraph StudioのSubgraphページにあります。 ```sh $ graph auth --studio $ graph deploy --studio ``` -- You will be asked for a version label. It's strongly recommended to use the following conventions for naming your versions. Example: `0.0.1`, `v1`, `version1` +- バージョンラベルの入力を求められます。 バージョンラベルの命名には、以下のような規約を使用することを強くお勧めします。 例: `0.0.1`, `v1`, `version1` -### 5. Check your logs +### 5. ログの確認 -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +エラーが発生した場合は、ログを確認してください。 サブグラフが失敗している場合は、 [GraphiQL Playground](https://graphiql-online.com/) を使ってサブグラフの健全性をクエリすることができます。 [このエンドポイント](https://api.thegraph.com/index-node/graphql) を使用します。 なお、以下のクエリを活用して、サブグラフのデプロイメントIDを入力することができます。 この場合、 `Qm...` がデプロイメントIDです(これはSubgraphページの**Details**に記載されています)。 以下のクエリは、サブグラフが失敗したときに教えてくれるので、適宜デバッグすることができます: ```sh { @@ -109,15 +109,15 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. サブグラフのクエリ -You can now query your subgraph by following [these instructions](/developer/query-the-graph). You can query from your dapp if you don't have your API key via the free, rate limited temporary query URL that can be used for development and staging. You can read the additional instructions for how to query a subgraph from a frontend application [here](/developer/querying-from-your-app). +[以下の手順](/developer/query-the-graph)でサブグラフのクエリを実行できます。 APIキーを持っていない場合は、開発やステージングに使用できる無料の一時的なクエリURLを使って、自分のdappからクエリを実行できます。 フロントエンドアプリケーションからサブグラフを照会する方法については、[こちら](/developer/querying-from-your-app)の説明をご覧ください。 -## Hosted Service +## ホスティングサービス -### 1. Install the Graph CLI +### 1. Graph CLIのインストール -"The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. +"Graph CLI "はnpmパッケージなので、使用するには`npm`または `yarn`がインストールされていなければなりません。 ```sh # NPM @@ -127,39 +127,39 @@ $ npm install -g @graphprotocol/graph-cli $ yarn global add @graphprotocol/graph-cli ``` -### 2. Initialize your Subgraph +### 2. サブグラフの初期化 -- Initialize your subgraph from an existing contract. 
+- 既存のコントラクトからサブグラフを初期化します。 ```sh $ graph init --product hosted-service --from-contract
``` -- You will be asked for a subgraph name. The format is `/`. Ex: `graphprotocol/examplesubgraph` +- サブグラフの名前を聞かれます。 形式は`/`です。 例:`graphprotocol/examplesubgraph` -- If you'd like to initialize from an example, run the command below: +- 例題から初期化したい場合は、以下のコマンドを実行します。 ```sh $ graph init --product hosted-service --from-example ``` -- In the case of the example, the subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- 例の場合、サブグラフはDani GrantによるGravityコントラクトに基づいており、ユーザーのアバターを管理し、アバターが作成または更新されるたびに`NewGravatar`または`UpdateGravatar`イベントを発行します。 -### 3. Write your Subgraph +### 3. サブグラフの作成 -The previous command will have created a scaffold from where you can build your subgraph. When making changes to the subgraph, you will mainly work with three files: +先ほどのコマンドで、サブグラフを作成するための足場ができました。 サブグラフに変更を加える際には、主に3つのファイルを使用します: -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index -- Schema (schema.graphql) - The GraphQL schema define what data you wish to retrieve from the subgraph -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema +- マニフェスト (subgraph.yaml) - マニフェストは、サブグラフがインデックスするデータソースを定義します。 +- スキーマ (schema.graphql) - GraphQLスキーマは、サブグラフから取得したいデータを定義します。 +- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 -For more information on how to write your subgraph, see [Create a Subgraph](/developer/create-subgraph-hosted). +サブグラフの書き方の詳細については、 [Create a Subgraph](/developer/create-subgraph-hosted) を参照してください。 -### 4. Deploy your Subgraph +### 4. サブグラフのデプロイ -- Sign into the [Hosted Service](https://thegraph.com/hosted-service/) using your github account -- Click Add Subgraph and fill out the required information. Use the same subgraph name as in step 2. -- Run codegen in the subgraph folder +- Github アカウントを使用して[Hosted Service](https://thegraph.com/hosted-service/) にサインインします。 +- 「Add Subgraph」をクリックし、必要な情報を入力します。 手順2と同じサブグラフ名を使用します。 +- サブグラフのフォルダでcodegenを実行します。 ```sh # NPM @@ -169,16 +169,16 @@ $ npm run codegen $ yarn codegen ``` -- Add your Access token and deploy your subgraph. The access token is found on your dashboard in the Hosted Service. +- アクセストークンを追加して、サブグラフをデプロイします。 アクセストークンは、ダッシュボードのHosted Serviceにあります。 ```sh $ graph auth --product hosted-service $ graph deploy --product hosted-service / ``` -### 5. Check your logs +### 5. ログの確認 -The logs should tell you if there are any errors. If your subgraph is failing, you can query the subgraph health by using the [GraphiQL Playground](https://graphiql-online.com/). Use [this endpoint](https://api.thegraph.com/index-node/graphql). Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails so you can debug accordingly: +エラーが発生した場合は、ログを確認してください。 サブグラフが失敗している場合は、 [GraphiQL Playground](https://graphiql-online.com/) を使ってサブグラフの健全性をクエリすることができます。 [このエンドポイント](https://api.thegraph.com/index-node/graphql) を使用します。 なお、以下のクエリを活用して、サブグラフのデプロイメントIDを入力することができます。 この場合、 `Qm...` がデプロイメントIDです(これはSubgraphページの**Details**に記載されています)。 以下のクエリは、サブグラフが失敗したときに教えてくれるので、適宜デバッグすることができます: ```sh { @@ -222,6 +222,6 @@ The logs should tell you if there are any errors. If your subgraph is failing, y } ``` -### 6. Query your Subgraph +### 6. 
サブグラフのクエリ -Follow [these instructions](/hosted-service/query-hosted-service) to query your subgraph on the Hosted Service. +[こちらの手順](/hosted-service/query-hosted-service)に従って、ホステッドサービスでサブグラフをクエリします。 From d287b5bd809213793efd035c5907ce3d08d17729 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:11 -0500 Subject: [PATCH 157/241] New translations quick-start.mdx (Chinese Simplified) --- pages/zh/developer/quick-start.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/zh/developer/quick-start.mdx b/pages/zh/developer/quick-start.mdx index 398321403236..1856aca08178 100644 --- a/pages/zh/developer/quick-start.mdx +++ b/pages/zh/developer/quick-start.mdx @@ -9,7 +9,7 @@ This guide will quickly take you through how to initialize, create, and deploy y ## 子图工作室 -### 1. Install the Graph CLI +### 1. 安装Graph CLI The Graph CLI is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. @@ -115,7 +115,7 @@ You can now query your subgraph by following [these instructions](/developer/que ## 托管服务 -### 1. Install the Graph CLI +### 1. 安装Graph CLI "The Graph CLI is an npm package and you will need `npm` or `yarn` installed to use it. From 78ffa165031bd45e96f6ceb65676d498db5be67c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:13 -0500 Subject: [PATCH 158/241] New translations deploy-subgraph-hosted.mdx (Spanish) --- .../hosted-service/deploy-subgraph-hosted.mdx | 82 +++++++++---------- 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/pages/es/hosted-service/deploy-subgraph-hosted.mdx b/pages/es/hosted-service/deploy-subgraph-hosted.mdx index bdc532e205e4..5b5c2dacade7 100644 --- a/pages/es/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/es/hosted-service/deploy-subgraph-hosted.mdx @@ -1,56 +1,56 @@ --- -title: Deploy a Subgraph to the Hosted Service +title: Despliega un Subgrafo en el Servicio Alojado --- -If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. +Si aún no lo has comprobado, revisa cómo escribir los archivos que componen un [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) y cómo instalar el [Graph CLI](https://github.com/graphprotocol/graph-cli) para generar el código para tu subgrafo. Ahora, es el momento de desplegar tu subgrafo en el Servicio Alojado, también conocido como Hosted Service. -## Create a Hosted Service account +## Crear una cuenta en el Servicio Alojado -Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. +Antes de utilizar el Servicio Alojado, crea una cuenta en nuestro Servicio Alojado. Para ello necesitarás una cuenta [Github](https://github.com/); si no tienes una, debes crearla primero. A continuación, navega hasta el [Hosted Service](https://thegraph.com/hosted-service/), haz clic en el botón _'Sign up with Github'_ y completa el flujo de autorización de Github. 
-## Store the Access Token +## Guardar el Token de Acceso -After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. +Luego de crear la cuenta, navega a tu [dashboard](https://thegraph.com/hosted-service/dashboard). Copia el token de acceso que aparece en el dashboard y ejecuta `graph auth --product hosted-service `. Esto almacenará el token de acceso en tu computadora. Sólo tienes que hacerlo una vez, o si alguna vez regeneras el token de acceso. -## Create a Subgraph on the Hosted Service +## Crear un Subgrafo en el Servicio Alojado -Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: +Antes de desplegar el subgrafo, es necesario crearlo en The Graph Explorer. Ve a [dashboard](https://thegraph.com/hosted-service/dashboard) y haz clic en el botón _'Add Subgraph'_ y completa la información siguiente según corresponda: -**Image** - Select an image to be used as a preview image and thumbnail for the subgraph. +**Image** - Selecciona una imagen que se utilizará como imagen de vista previa y miniatura para el subgrafo. -**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ +**Subgraph Name** -Junto con el nombre de la cuenta con la que se crea el subgrafo, esto también definirá el nombre de estilo `account-name/subgraph-name` utilizado para los despliegues y los endpoints de GraphQL. _Este campo no puede ser cambiado posteriormente._ -**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ +**Account** - La cuenta con la que se crea el subgrafo. Puede ser la cuenta de un individuo o de una organización. _Los Subgrafos no pueden ser movidos entre cuentas posteriormente._ -**Subtitle** - Text that will appear in subgraph cards. +**Subtitle** - Texto que aparecerá en las tarjetas del subgrafo. -**Description** - Description of the subgraph, visible on the subgraph details page. +**Description** - Descripción del subgrafo, visible en la página de detalles del subgrafo. -**GitHub URL** - Link to the subgraph repository on GitHub. +**GitHub URL** Enlace al repositorio de subgrafos en GitHub. -**Hide** - Switching this on hides the subgraph in the Graph Explorer. +**Hide** - Al activar esta opción se oculta el subgrafo en the Graph Explorer. -After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). +Después de guardar el nuevo subgrafo, se te muestra una pantalla con ayuda sobre cómo instalar the Graph CLI, cómo generar el andamiaje para un nuevo subgrafo, y cómo desplegar tu subgrafo. Los dos primeros pasos se trataron en la sección [Definir un Subgrafo](/developer/define-subgraph-hosted). 
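For reference, a minimal command-line sketch of those first two steps (installing the Graph CLI and generating the scaffolding) could look like the following; the `graphprotocol/examplesubgraph` name is only the illustrative name used elsewhere in this guide:

```sh
# Install the Graph CLI globally (yarn global add @graphprotocol/graph-cli also works)
npm install -g @graphprotocol/graph-cli

# Generate the scaffolding for a subgraph that will be deployed to the Hosted Service
graph init --product hosted-service graphprotocol/examplesubgraph
```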

-## Deploy a Subgraph on the Hosted Service
+## Desplegar un Subgrafo en el Servicio Alojado

-Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files.
+El despliegue de tu subgrafo subirá los archivos del subgrafo que has construido con `yarn build` a IPFS y le dirá a Graph Explorer que empiece a indexar tu subgrafo usando estos archivos.

-You deploy the subgraph by running `yarn deploy`
+El subgrafo lo despliegas ejecutando `yarn deploy`

-After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined.
+Después de desplegar el subgrafo, The Graph Explorer pasará a mostrar el estado de sincronización de tu subgrafo. Dependiendo de la cantidad de datos y del número de eventos que haya que extraer de los bloques históricos de Ethereum, empezando por el bloque génesis, la sincronización puede tardar desde unos minutos hasta varias horas. El estado del subgrafo cambia a `Synced` una vez que the Graph Node ha extraído todos los datos de los bloques históricos. The Graph Node continuará inspeccionando los bloques de Ethereum para tu subgrafo a medida que estos bloques sean minados.

-## Redeploying a Subgraph
+## Re-Desplegar un Subgrafo

-When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block.
+Cuando hagas cambios en la definición de tu subgrafo, por ejemplo para arreglar un problema en los mapeos de entidades, ejecuta de nuevo el comando `yarn deploy` anterior para desplegar la versión actualizada de tu subgrafo. Cualquier actualización de un subgrafo requiere que Graph Node reindexe todo tu subgrafo, de nuevo empezando por el bloque génesis.

-If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing.
+Si tu subgrafo previamente desplegado está todavía en estado `Syncing`, será inmediatamente reemplazado por la nueva versión desplegada. Si el subgrafo previamente desplegado ya está completamente sincronizado, Graph Node marcará la nueva versión desplegada como `Pending Version`, la sincronizará en segundo plano, y sólo reemplazará la versión actualmente desplegada por la nueva una vez que la sincronización de la nueva versión haya terminado. Esto asegura que tienes un subgrafo con el que trabajar mientras la nueva versión se sincroniza.
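As a rough illustration of the redeploy cycle described above, and assuming the default `codegen`, `build` and `deploy` scripts that `graph init` scaffolds into `package.json`, an update typically looks like this:

```sh
# Regenerate types from the schema and ABIs, rebuild the mappings, then redeploy
yarn codegen
yarn build
yarn deploy
```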
-### Deploying the subgraph to multiple Ethereum networks +### Desplegar el subgrafo en múltiples redes Ethereum -In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +En algunos casos, querrás desplegar el mismo subgrafo en múltiples redes Ethereum sin duplicar todo su código. El principal desafío que supone esto es que las direcciones de los contratos en estas redes son diferentes. Una solución que permite parametrizar aspectos como las direcciones de los contratos es generar partes de los mismos mediante un sistema de plantillas como [Mustache](https://mustache.github.io/) o [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: +Para ilustrar este enfoque, supongamos que un subgrafo debe desplegarse en mainnet y Ropsten utilizando diferentes direcciones de contrato. Entonces podrías definir dos archivos de configuración que proporcionen las direcciones para cada red: ```json { @@ -59,7 +59,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -and +y ```json { @@ -68,7 +68,7 @@ and } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +Junto con eso, sustituirías el nombre de la red y las direcciones en el manifiesto con un marcador de posición variable `{{network}}` y `{{address}}` y renombra el manifiesto a e.g. `subgraph.template.yaml`: ```yaml # ... @@ -85,7 +85,7 @@ dataSources: kind: ethereum/events ``` -In order generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +Para generar un manifiesto a cualquiera de las dos redes, podrías añadir dos comandos adicionales a `package.json` junto con una dependencia en `mustache`: ```json { @@ -102,7 +102,7 @@ In order generate a manifest to either network, you could add two additional com } ``` -To deploy this subgraph for mainnet or Ropsten you would now simply run one of the two following commands: +Para desplegar este subgrafo para mainnet o Ropsten, sólo tienes que ejecutar uno de los dos comandos siguientes: ```sh # Mainnet: @@ -112,15 +112,15 @@ yarn prepare:mainnet && yarn deploy yarn prepare:ropsten && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +Un ejemplo práctico de esto se puede encontrar [aquí](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). -**Note:** This approach can also be applied more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
+**Nota:** Este enfoque también puede aplicarse a situaciones más complejas, en las que es necesario sustituir más que las direcciones de los contratos y los nombres de las redes o en las que también se generan mapeos o ABIs a partir de plantillas. -## Checking subgraph health +## Comprobar de la fortaleza del subgrafo -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +Si un subgrafo se sincroniza con éxito, es una buena señal de que seguirá funcionando bien para siempre. Sin embargo, los nuevos disparadores en la cadena pueden hacer que tu subgrafo se encuentre con una condición de error no probada o puede empezar a retrasarse debido a problemas de rendimiento o problemas con los operadores de nodos. -Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node expone un endpoint graphql que puedes consultar para comprobar el estado de tu subgrafo. En el Servicio Alojado, está disponible en `https://api.thegraph.com/index-node/graphql`. En el nodo local está disponible por default en el puerto `8030/graphql`. El esquema completo para este endpoint se puede encontrar [aquí](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). A continuación se muestra un ejemplo de consulta que comprueba el estado de la versión actual de un subgrafo: ```graphql { @@ -147,14 +147,14 @@ Graph Node exposes a graphql endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. +Esto te dará el `chainHeadBlock` que puedes comparar con el `latestBlock` de tu subgrafo para comprobar si se está retrasando. `synced` informa si el subgrafo ha alcanzado la cadena. `health` actualmente puede tomar los valores de `healthy` si no hubo errores, o `failed` si hubo un error que detuvo el progreso del subgrafo. En este caso puedes consultar el campo `fatalError` para conocer los detalles de este error. -## Subgraph archive policy +## Política de archivos de subgrafos -The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. +El Servicio Alojado es un indexador gratuito de Graph Node. Los desarrolladores pueden desplegar subgrafos que indexen una serie de redes, que serán indexadas y estarán disponibles para su consulta a través de graphQL. -To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. 
+Para mejorar el rendimiento del servicio para los subgrafos activos, el Servicio Alojado archivará los subgrafos que estén inactivos. -**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.** +**Un subgrafo se define como "inactivo" si se desplegó en el Servicio Alojado hace más de 45 días, y si ha recibido 0 consultas en los últimos 30 días.** -Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. +Los desarrolladores serán notificados por correo electrónico si uno de sus subgrafos ha sido marcado como inactivo 7 días antes de su eliminación. Si desean "activar" su subgrafo, pueden hacerlo realizando una consulta en el playground graphQL de su subgrafo. Los desarrolladores siempre pueden volver a desplegar un subgrafo archivado si lo necesitan de nuevo. From 14edfba6aac958692439e83ab17ecf55788ffb50 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:16 -0500 Subject: [PATCH 159/241] New translations deploy-subgraph-hosted.mdx (Chinese Simplified) --- .../hosted-service/deploy-subgraph-hosted.mdx | 82 +++++++++---------- 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx index bdc532e205e4..5fe5ccacae0e 100644 --- a/pages/zh/hosted-service/deploy-subgraph-hosted.mdx +++ b/pages/zh/hosted-service/deploy-subgraph-hosted.mdx @@ -1,56 +1,56 @@ --- -title: Deploy a Subgraph to the Hosted Service +title: 将子图部署到托管服务上 --- -If you have not checked out already, check out how to write the files that make up a [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) and how to install the [Graph CLI](https://github.com/graphprotocol/graph-cli) to generate code for your subgraph. Now, it's time to deploy your subgraph to the Hosted Service, also known as the Hosted Service. +如果您尚未查看,请先查看如何编写组成 [子图清单](/developer/create-subgraph-hosted#the-subgraph-manifest) 的文件以及如何安装 [Graph CLI](https://github.com/graphprotocol/graph-cli) 为您的子图生成代码。 现在,让我们将您的子图部署到托管服务上。 -## Create a Hosted Service account +## 创建托管服务帐户 -Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button and complete Github's authorization flow. +在使用托管服务之前,请先在我们的托管服务中创建一个帐户。 为此,您将需要一个 [Github](https://github.com/) 帐户;如果您还没有,您需要先创建一个账户。 然后,导航到 [托管服务](https://thegraph.com/hosted-service/), 单击 _'使用 Github 注册'_ 按钮并完成 Github 的授权流程。 -## Store the Access Token +## 存储访问令牌 -After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token. 
+创建帐户后,导航到您的 [仪表板](https://thegraph.com/hosted-service/dashboard)。 复制仪表板上显示的访问令牌并运行 `graph auth --product hosted-service `。 这会将访问令牌存储在您的计算机上。 如果您不需要重新生成访问令牌,您就只需要这样做一次。 -## Create a Subgraph on the Hosted Service +## 在托管服务上创建子图 -Before deploying the subgraph, you need to create it in The Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard) and click on the _'Add Subgraph'_ button and fill in the information below as appropriate: +在部署子图之前,您需要在 The Graph Explorer 中创建它。 转到 [仪表板](https://thegraph.com/hosted-service/dashboard) ,单击 _'添加子图'_ 按钮,并根据需要填写以下信息: -**Image** - Select an image to be used as a preview image and thumbnail for the subgraph. +**图像** - 选择要用作子图的预览图和缩略图的图像。 -**Subgraph Name** - Together with the account name that the subgraph is created under, this will also define the `account-name/subgraph-name`-style name used for deployments and GraphQL endpoints. _This field cannot be changed later._ +**子图名称** - 子图名称连同下面将要创建的子图帐户名称,将定义用于部署和 GraphQL 端点的`account-name/subgraph-name`样式名称。 _此字段以后无法更改。_ -**Account** - The account that the subgraph is created under. This can be the account of an individual or organization. _Subgraphs cannot be moved between accounts later._ +**帐户** - 创建子图的帐户。 这可以是个人或组织的帐户。 _以后不能在帐户之间移动子图。_ -**Subtitle** - Text that will appear in subgraph cards. +**副标题** - 将出现在子图卡中的文本。 -**Description** - Description of the subgraph, visible on the subgraph details page. +**描述** - 子图的描述,在子图详细信息页面上可见。 -**GitHub URL** - Link to the subgraph repository on GitHub. +**GitHub URL** - 存储在GitHub 上的子图代码的链接。 -**Hide** - Switching this on hides the subgraph in the Graph Explorer. +**隐藏** - 打开此选项可隐藏Graph Explorer中的子图。 -After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Define a Subgraph section](/developer/define-subgraph-hosted). +保存新子图后,您会看到一个屏幕,其中包含有关如何安装 Graph CLI、如何为新子图生成脚手架以及如何部署子图的帮助信息。 前面两部分在[定义子图](/developer/define-subgraph-hosted)中进行了介绍。 -## Deploy a Subgraph on the Hosted Service +## 在托管服务上部署子图 -Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files. +一旦部署您的子图,您使用`yarn build` 命令构建的子图文件将被上传到 IPFS,并告诉 Graph Explorer 开始使用这些文件索引您的子图。 -You deploy the subgraph by running `yarn deploy` +您可以通过运行 `yarn deploy`来部署子图。 -After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical Ethereum blocks, starting with the genesis block, syncing can take from a few minutes to several hours. The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting Ethereum blocks for your subgraph as these blocks are mined. +部署子图后,Graph Explorer将切换到显示子图的同步状态。 根据需要从历史以太坊区块中提取的数据量和事件数量的不同,从创世区块开始,同步操作可能需要几分钟到几个小时。 一旦 Graph节点从历史区块中提取了所有数据,子图状态就会切换到`Synced`。 当新的以太坊区块出现时,Graph节点将继续按照子图的要求检查这些区块的信息。 -## Redeploying a Subgraph +## 重新部署子图 -When making changes to your subgraph definition, for example to fix a problem in the entity mappings, run the `yarn deploy` command above again to deploy the updated version of your subgraph. Any update of a subgraph requires that Graph Node reindexes your entire subgraph, again starting with the genesis block. 
+更改子图定义后,例如:修复实体映射中的一个问题,再次运行上面的 `yarn deploy` 命令可以部署新版本的子图。 子图的任何更新都需要Graph节点再次从创世块开始重新索引您的整个子图。 -If your previously deployed subgraph is still in status `Syncing`, it will be immediately replaced with the newly deployed version. If the previously deployed subgraph is already fully synced, Graph Node will mark the newly deployed version as the `Pending Version`, sync it in the background, and only replace the currently deployed version with the new one once syncing the new version has finished. This ensures that you have a subgraph to work with while the new version is syncing. +如果您之前部署的子图仍处于`Syncing`状态,系统则会立即将其替换为新部署的版本。 如果之前部署的子图已经完全同步,Graph节点会将新部署的版本标记为`Pending Version`,在后台进行同步,只有在新版本同步完成后,才会用新的版本替换当前部署的版本。 这样做可以确保在新版本同步时您仍然有子图可以使用。 -### Deploying the subgraph to multiple Ethereum networks +### 将子图部署到多个以太坊网络 -In some cases, you will want to deploy the same subgraph to multiple Ethereum networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. One solution that allows to parameterize aspects like contract addresses is to generate parts of it using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +在某些情况下,您可能希望将相同的子图部署到多个以太坊网络,而无需复制其所有代码。 这样做的主要挑战是这些网络上的合约地址不同。 允许参数化合约地址等配置的一种解决方案是使用 [Mustache](https://mustache.github.io/)或 [Handlebars](https://handlebarsjs.com/)等模板系统。 -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Ropsten using different contract addresses. You could then define two config files providing the addresses for each network: +为了说明这种方法,我们假设使用不同的合约地址将子图部署到主网和 Ropsten上。 您可以定义两个配置文件,为每个网络提供相应的地址: ```json { @@ -59,7 +59,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -and +和 ```json { @@ -68,7 +68,7 @@ and } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +除此之外,您可以用变量占位符 `{{network}}` 和 `{{address}}` 替换清单中的网络名称和地址,并将清单重命名为例如 `subgraph.template.yaml`: ```yaml # ... @@ -85,7 +85,7 @@ dataSources: kind: ethereum/events ``` -In order generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +为了给每个网络生成清单,您可以向 `package.json` 添加两个附加命令,以及对 `mustache` 的依赖项: ```json { @@ -102,7 +102,7 @@ In order generate a manifest to either network, you could add two additional com } ``` -To deploy this subgraph for mainnet or Ropsten you would now simply run one of the two following commands: +要为主网或 Ropsten 部署此子图,您现在只需运行以下两个命令中的任意一个: ```sh # Mainnet: @@ -112,15 +112,15 @@ yarn prepare:mainnet && yarn deploy yarn prepare:ropsten && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +您可以在[这里](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759)找到一个工作示例。 -**Note:** This approach can also be applied more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. +**注意:** 这种方法也可以应用在更复杂的情况下,例如:需要替换的不仅仅是合约地址和网络名称,或者还需要从模板生成映射或 ABI。 -## Checking subgraph health +## 检查子图状态 -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. 
However, new triggers on the chain might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +如果子图成功同步,这是表明它将运行良好的一个好的信号。 但是,链上的新事件可能会导致您的子图遇到未经测试的错误环境,或者由于性能或节点方面的问题而开始落后于链上数据。 -Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph 节点公开了一个 graphql 端点,您可以通过查询该端点来检查子图的状态。 在托管服务上,该端点的链接是 `https://api.thegraph.com/index-node/graphql`。 在本地节点上,默认情况下该端点在端口 `8030/graphql` 上可用。 该端点的完整数据模式可以在[此处](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。 这是一个检查子图当前版本状态的示例查询: ```graphql { @@ -147,14 +147,14 @@ Graph Node exposes a graphql endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors ocurred, or `failed` if there was an error which halted the progress of the subgraph. In this case you can check the `fatalError` field for details on this error. +这将为您提供 `chainHeadBlock`,您可以将其与子图上的 `latestBlock` 进行比较,以检查子图是否落后。 通过`synced`,可以了解子图是否与链上数据完全同步。 如果子图没有发生错误,`health` 将返回`healthy`,如果有一个错误导致子图的同步进度停止,那么 `health`将返回`failed` 。 在这种情况下,您可以检查 `fatalError` 字段以获取有关此错误的详细信息。 -## Subgraph archive policy +## 子图归档策略 -The Hosted Service is a free Graph Node indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL. +托管服务是一个免费的Graph节点索引器。 开发人员可以部署索引一系列网络的子图,这些网络将被索引,并可以通过 graphQL 进行查询。 -To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs which are inactive. +为了提高活跃子图的服务性能,托管服务将归档不活跃的子图。 -**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 30 days.** +**如果一个子图在 45 天前部署到托管服务,并且在过去 30 天内收到 0 个查询,则将其定义为“不活跃”。** -Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again. 
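As a hedged sketch of how the health check described in this file can be run from a terminal, the status query shown above can be posted to the index-node endpoint with `curl`; `org/subgraph` is only a placeholder for your own subgraph name:

```sh
# Query the Hosted Service index node for the indexing status of the current version
curl -s https://api.thegraph.com/index-node/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatusForCurrentVersion(subgraphName: \"org/subgraph\") { synced health chains { chainHeadBlock { number } latestBlock { number } } } }"}'
```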
+如果开发人员的一个子图被标记为不活跃,并将 7 天后被删除,托管服务会通过电子邮件通知开发者。 如果他们希望“激活”他们的子图,他们可以通过在其子图的托管服务 graphQL playground中发起查询来实现。 如果再次需要使用这个子图,开发人员也可以随时重新部署存档的子图。 From 622c6ef2b6092f871dd4010fa10597af20b3f035 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:17 -0500 Subject: [PATCH 160/241] New translations migrating-subgraph.mdx (Spanish) --- pages/es/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/hosted-service/migrating-subgraph.mdx b/pages/es/hosted-service/migrating-subgraph.mdx index 85f72f053b30..eda54d1931ed 100644 --- a/pages/es/hosted-service/migrating-subgraph.mdx +++ b/pages/es/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## Introduction +## Introducción This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o From f6b6f806cae03b2d43b07c4bac2583b37464cea7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:18 -0500 Subject: [PATCH 161/241] New translations migrating-subgraph.mdx (Arabic) --- pages/ar/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ar/hosted-service/migrating-subgraph.mdx b/pages/ar/hosted-service/migrating-subgraph.mdx index 85f72f053b30..9f314e8e9034 100644 --- a/pages/ar/hosted-service/migrating-subgraph.mdx +++ b/pages/ar/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## Introduction +## مقدمة This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o From 9c5f97ec021e104061190796e6b7521529c2fb06 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:19 -0500 Subject: [PATCH 162/241] New translations migrating-subgraph.mdx (Japanese) --- pages/ja/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/hosted-service/migrating-subgraph.mdx b/pages/ja/hosted-service/migrating-subgraph.mdx index 85f72f053b30..8d556f5644db 100644 --- a/pages/ja/hosted-service/migrating-subgraph.mdx +++ b/pages/ja/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## Introduction +## イントロダクション This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. 
The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o From 923e28eb2e753797058c4614fb7d0cfe2861dc0c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:23 -0500 Subject: [PATCH 163/241] New translations graphql-api.mdx (Spanish) --- pages/es/developer/graphql-api.mdx | 120 ++++++++++++++--------------- 1 file changed, 60 insertions(+), 60 deletions(-) diff --git a/pages/es/developer/graphql-api.mdx b/pages/es/developer/graphql-api.mdx index f9cb6214fcd9..4513e9f5c724 100644 --- a/pages/es/developer/graphql-api.mdx +++ b/pages/es/developer/graphql-api.mdx @@ -1,16 +1,16 @@ --- -title: GraphQL API +title: API GraphQL --- -This guide explains the GraphQL Query API that is used for the Graph Protocol. +Esta guía explica la API de consulta GraphQL que se utiliza para the Graph Protocol. -## Queries +## Consultas -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +En tu esquema de subgrafos defines tipos llamados `Entities`. Por cada tipo de `Entity`, se generará un campo `entity` y `entities` en el nivel superior del tipo `Query`. Ten en cuenta que no es necesario incluir `query` en la parte superior de la consulta `graphql` cuando se utiliza The Graph. -#### Examples +#### Ejemplos -Query for a single `Token` entity defined in your schema: +Consulta de una única entidad `Token` definida en tu esquema: ```graphql { @@ -21,9 +21,9 @@ Query for a single `Token` entity defined in your schema: } ``` -**Note:** When querying for a single entity, the `id` field is required and it must be a string. +**Nota:** Cuando se consulta una sola entidad, el campo `id` es obligatorio y debe ser un string. -Query all `Token` entities: +Consulta todas las entidades `Token`: ```graphql { @@ -34,11 +34,11 @@ Query all `Token` entities: } ``` -### Sorting +### Clasificación -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +Al consultar una colección, el parámetro `orderBy` puede utilizarse para ordenar por un atributo específico. Además, el `orderDirection` se puede utilizar para especificar la dirección de ordenación, `asc` para ascendente o `desc` para descendente. -#### Example +#### Ejemplo ```graphql { @@ -49,17 +49,17 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe } ``` -### Pagination +### Paginación -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. +Al consultar una colección, el parámetro `first` puede utilizarse para paginar desde el principio de la colección. Cabe destacar que el orden por defecto es por ID en orden alfanumérico ascendente, no por tiempo de creación. 
-Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +Además, el parámetro `skip` puede utilizarse para saltar entidades y paginar. por ejemplo, `first:100` muestra las primeras 100 entidades y `first:100, skip:100` muestra las siguientes 100 entidades. -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +Las consultas deben evitar el uso de valores de `skip` muy grandes, ya que suelen tener un rendimiento deficiente. Para recuperar un gran número de elementos, es mucho mejor para paginar recorrer las entidades basándose en un atributo, como se muestra en el último ejemplo. -#### Example +#### Ejemplo -Query the first 10 tokens: +Consulta los primeros 10 tokens: ```graphql { @@ -70,11 +70,11 @@ Query the first 10 tokens: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Para consultar grupos de entidades en medio de una colección, el parámetro `skip` puede utilizarse junto con el parámetro `first` para omitir un número determinado de entidades empezando por el principio de la colección. -#### Example +#### Ejemplo -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la colección: ```graphql { @@ -85,9 +85,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example +#### Ejemplo -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. Por ejemplo, un cliente podría recuperar un gran número de tokens utilizando esta consulta: ```graphql { @@ -100,15 +100,15 @@ If a client needs to retrieve a large number of entities, it is much more perfor } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +La primera vez, enviaría la consulta con `lastID = ""`, y para las siguientes peticiones pondría `lastID` al atributo `id` de la última entidad de la petición anterior. Este enfoque tendrá un rendimiento significativamente mejor que el uso de valores crecientes de `skip`. -### Filtering +### Filtro -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +Puedes utilizar el parámetro `where` en tus consultas para filtrar por diferentes propiedades. Puedes filtrar por múltiples valores dentro del parámetro `where`. 
-#### Example +#### Ejemplo -Query challenges with `failed` outcome: +Desafíos de consulta con resultado `failed`: ```graphql { @@ -122,9 +122,9 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +Puede utilizar sufijos como `_gt`, `_lte` para la comparación de valores: -#### Example +#### Ejemplo ```graphql { @@ -136,7 +136,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: } ``` -Full list of parameter suffixes: +Lista completa de sufijos de parámetros: ```graphql _not @@ -154,17 +154,17 @@ _not_starts_with _not_ends_with ``` -Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. +Ten en cuenta que algunos sufijos sólo son compatibles con determinados tipos. Por ejemplo, `Boolean` solo admite `_not`, `_in`, y `_not_in`. -### Time-travel queries +### Consultas sobre Time-travel -You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +Puedes consultar el estado de tus entidades no sólo para el último bloque, que es el predeterminado, sino también para un bloque arbitrario en el pasado. El bloque en el que debe producirse una consulta puede especificarse por su número de bloque o su hash de bloque incluyendo un argumento `block` en los campos de nivel superior de las consultas. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +El resultado de una consulta de este tipo no cambiará con el tiempo, es decir, la consulta en un determinado bloque pasado devolverá el mismo resultado sin importar cuándo se ejecute, con la excepción de que si se consulta en un bloque muy cercano al encabezado de la cadena de Ethereum, el resultado podría cambiar si ese bloque resulta no estar en la cadena principal y la cadena se reorganiza. Una vez que un bloque puede considerarse definitivo, el resultado de la consulta no cambiará. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +Ten en cuenta que la implementación está sujeta a ciertas limitaciones que podrían violar estas garantías. 
La implementación no siempre puede decir que un hash de bloque dado no está en la cadena principal en absoluto, o que el resultado de una consulta por hash de bloque para un bloque que no puede considerarse final todavía podría estar influenciado por una reorganización de bloque que se ejecuta simultáneamente con la consulta. No afectan a los resultados de las consultas por el hash del bloque cuando éste es definitivo y se sabe que está en la cadena principal. [ Esta cuestión](https://github.com/graphprotocol/graph-node/issues/1405) explica con detalle cuáles son estas limitaciones. -#### Example +#### Ejemplo ```graphql { @@ -178,9 +178,9 @@ Note that the current implementation is still subject to certain limitations tha } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `Application`, tal y como existían directamente después de procesar el bloque número 8.000.000. -#### Example +#### Ejemplo ```graphql { @@ -194,26 +194,26 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +Esta consulta devolverá las entidades `Challenge`, y sus entidades asociadas `Application`, tal y como existían directamente después de procesar el bloque con el hash dado. -### Fulltext Search Queries +### Consultas de Búsqueda de Texto Completo -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Los campos de consulta de búsqueda de texto completo proporcionan una API de búsqueda de texto expresiva que puede añadirse al esquema de subgrafos y personalizarse. Consulta [Definiendo los campos de búsqueda de texto completo](/developer/create-subgraph-hosted#defining-fulltext-search-fields) para añadir la búsqueda de texto completo a tu subgrafo. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Las consultas de búsqueda de texto completo tienen un campo obligatorio, `text`, para suministrar los términos de búsqueda. Hay varios operadores especiales de texto completo que se pueden utilizar en este campo de búsqueda de `text`. -Fulltext search operators: +Operadores de búsqueda de texto completo: -| Symbol | Operator | Description | -| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) 
| +| Símbolo | Operador | Descripción | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | +| | | `Or` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados | +| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | +| `:*` | `Prefix` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) | -#### Examples +#### Ejemplos -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +Utilizando el operador `or`, esta consulta filtrará las entidades del blog que tengan variaciones de "anarchism" o de "crumpet" en sus campos de texto completo. ```graphql { @@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +El operador `follow by` especifica unas palabras a una distancia determinada en los documentos de texto completo. La siguiente consulta devolverá todos los blogs con variaciones de "decentralize" seguidas de "philosophy" ```graphql { @@ -239,7 +239,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +Combina los operadores de texto completo para crear filtros más complejos. Con un operador de búsqueda de pretexto combinado con un follow by esta consulta de ejemplo coincidirá con todas las entidades del blog con palabras que empiecen por "lou" seguidas de "music". ```graphql { @@ -252,16 +252,16 @@ Combine fulltext operators to make more complex filters. With a pretext search o } ``` -## Schema +## Esquema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +El esquema de tu fuente de datos, es decir, los tipos de entidad, los valores y las relaciones que están disponibles para la consulta, se definen a través del [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +Los esquemas de GraphQL suelen definir tipos raíz para `queries`, `subscriptions` y `mutations`. The Graph solo admite `queries`. El tipo de `Query` raíz de tu subgrafo se genera automáticamente a partir del esquema GraphQL que se incluye en el manifiesto de tu subgrafo. 
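As a hedged sketch of that generation step (the `Token` type below is purely illustrative and not tied to any particular subgraph), an entity declared in `schema.graphql` such as:

```graphql
# Illustrative entity definition in schema.graphql
type Token @entity {
  id: ID!
  owner: Bytes!
}
```

would automatically produce a `token(id: ...)` field and a `tokens` collection field on the root `Query` type, which is how the `token` and `tokens` queries shown earlier in this file become available.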
-> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> **Nota:** Nuestra API no expone mutaciones porque se espera que los desarrolladores emitan transacciones directamente contra la blockchain subyacente desde sus aplicaciones. -### Entities +### Entidades -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +Todos los tipos GraphQL con directivas `@entity` en tu esquema serán tratados como entidades y deben tener un campo `ID`. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **Nota:** Actualmente, todos los tipos de tu esquema deben tener una directiva `@entity`. En el futuro, trataremos los tipos sin una directiva `@entity` como objetos de valor, pero esto todavía no está soportado. From 90b0c8ffd5ad189ed184bea3271af941a50902a2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:24 -0500 Subject: [PATCH 164/241] New translations graphql-api.mdx (Arabic) --- pages/ar/developer/graphql-api.mdx | 116 ++++++++++++++--------------- 1 file changed, 58 insertions(+), 58 deletions(-) diff --git a/pages/ar/developer/graphql-api.mdx b/pages/ar/developer/graphql-api.mdx index f9cb6214fcd9..d6771fd72547 100644 --- a/pages/ar/developer/graphql-api.mdx +++ b/pages/ar/developer/graphql-api.mdx @@ -2,15 +2,15 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for the Graph Protocol. +يشرح هذا الدليل GraphQL Query API المستخدمة في بروتوكول Graph. -## Queries +## الاستعلامات -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +في مخطط الـ subgraph الخاص بك ، يمكنك تعريف أنواع وتسمى `Entities`. لكل نوع من `Entity` ، سيتم إنشاء حقل `entity` و `entities` في المستوى الأعلى من نوع `Query`. لاحظ أنه لا يلزم تضمين `query` أعلى استعلام `graphql` عند استخدام The Graph. -#### Examples +#### أمثلة -Query for a single `Token` entity defined in your schema: +الاستعلام عن كيان `Token` واحد معرف في مخططك: ```graphql { @@ -21,9 +21,9 @@ Query for a single `Token` entity defined in your schema: } ``` -**Note:** When querying for a single entity, the `id` field is required and it must be a string. +** ملاحظة: ** عند الاستعلام عن كيان واحد ، فإن الحقل `id` يكون مطلوبا ويجب أن يكون string. -Query all `Token` entities: +الاستعلام عن جميع كيانات `Token`: ```graphql { @@ -34,11 +34,11 @@ Query all `Token` entities: } ``` -### Sorting +### الفرز -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +عند الاستعلام عن مجموعة ، يمكن استخدام البارامتر `orderBy` للترتيب حسب صفة معينة. بالإضافة إلى ذلك ، يمكن استخدام `OrderDirection` لتحديد اتجاه الفرز ،`asc` للترتيب التصاعدي أو `desc` للترتيب التنازلي. 
-#### Example +#### مثال ```graphql { @@ -49,17 +49,17 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe } ``` -### Pagination +### ترقيم الصفحات -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. +عند الاستعلام عن مجموعة ، يمكن استخدام البارامتر `first` لترقيم الصفحات من بداية المجموعة. من الجدير بالذكر أن ترتيب الفرز الافتراضي يكون حسب الـ ID بترتيب رقمي تصاعدي ، وليس حسب وقت الإنشاء. -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +علاوة على ذلك ، يمكن استخدام البارامتر `skip` لتخطي الكيانات وترقيم الصفحات. على سبيل المثال `first:100` يعرض أول 100 عنصر و `first:100, skip:100` يعرض 100 عنصر التالية. -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +الاستعلامات يجب أن تتجنب استخدام قيم `skip` كبيرة جدا نظرا لأنها تؤدي بشكل عام أداء ضعيفا. لجلب عدد كبير من العناصر ، من الأفضل تصفح الكيانات بناء على صفة كما هو موضح في المثال الأخير. -#### Example +#### مثال -Query the first 10 tokens: +استعلم عن أول 10 توكن: ```graphql { @@ -70,11 +70,11 @@ Query the first 10 tokens: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +للاستعلام عن مجموعات الكيانات في منتصف المجموعة ، يمكن استخدام البارامتر `skip` بالاصافة لبارامتر `first` لتخطي عدد محدد من الكيانات بدءا من بداية المجموعة. -#### Example +#### مثال -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +الاستعلام عن 10 كيانات `Token` ،بإزاحة 10 أماكن من بداية المجموعة: ```graphql { @@ -85,9 +85,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example +#### مثال -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +إذا احتاج العميل إلى جلب عدد كبير من الكيانات ، فمن الأفضل أن تستند الاستعلامات إلى إحدى الصفات والفلترة حسب تلك الصفة. على سبيل المثال ، قد يجلب العميل عددا كبيرا من التوكن باستخدام هذا الاستعلام: ```graphql { @@ -100,15 +100,15 @@ If a client needs to retrieve a large number of entities, it is much more perfor } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +في المرة الأولى ، سيتم إرسال الاستعلام مع `lastID = ""` ، وبالنسبة للطلبات اللاحقة ، سيتم تعيين `lastID` إلى صفة `id` للكيان الأخير في الطلب السابق. أداء هذا الأسلوب أفضل بكثير من استخدام زيادة قيم `skip`. -### Filtering +### الفلترة -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +يمكنك استخدام البارامتر `where` في الاستعلام لتصفية الخصائص المختلفة. يمكنك الفلترة على قيم متعددة ضمن البارامتر `where`. 
-#### Example +#### مثال -Query challenges with `failed` outcome: +تحديات الاسعلام مع نتيجة `failed`: ```graphql { @@ -122,9 +122,9 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +يمكنك استخدام لواحق مثل `_gt` ، `_lte` لمقارنة القيم: -#### Example +#### مثال ```graphql { @@ -136,7 +136,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: } ``` -Full list of parameter suffixes: +القائمة الكاملة للواحق البارامترات: ```graphql _not @@ -154,17 +154,17 @@ _not_starts_with _not_ends_with ``` -Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`. +يرجى ملاحظة أن بعض اللواحق مدعومة فقط لأنواع معينة. على سبيل المثال ، `Boolean` يدعم فقط `_not` و `_in` و `_not_in`. ### Time-travel queries -You can query the state of your entities not just for the latest block, which is the by default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +يمكنك الاستعلام عن حالة الكيانات الخاصة بك ليس فقط للكتلة الأخيرة ، والتي هي افتراضيا ، ولكن أيضا لكتلة اعتباطية في الماضي. يمكن تحديد الكتلة التي يجب أن يحدث فيها الاستعلام إما عن طريق رقم الكتلة أو hash الكتلة الخاص بها عن طريق تضمين وسيطة `block` في حقول المستوى الأعلى للاستعلامات. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the Ethereum chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +لن تتغير نتيجة مثل هذا الاستعلام بمرور الوقت ، أي أن الاستعلام في كتلة سابقة معينة سيعيد نفس النتيجة بغض النظر عن وقت تنفيذها ، باستثناء أنه إذا قمت بالاستعلام في كتلة قريبة جدا من رأس سلسلة Ethereum ، قد تتغير النتيجة إذا تبين أن هذه الكتلة ليست في السلسلة الرئيسية وتمت إعادة تنظيم السلسلة. بمجرد اعتبار الكتلة نهائية ، لن تتغير نتيجة الاستعلام. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +لاحظ أن التنفيذ الحالي لا يزال يخضع لقيود معينة قد تنتهك هذه الضمانات. لا يمكن للتنفيذ دائما أن يخبرنا أن hash كتلة معينة ليست في السلسلة الرئيسية ، أو أن نتيجة استعلام لكتلة عن طريق hash الكتلة لا يمكن اعتبارها نهائية ومع ذلك قد تتأثر بإعادة تنظيم الكتلة التي تعمل بشكل متزامن مع الاستعلام. لا تؤثر نتائج الاستعلامات عن طريق hash الكتلة عندما تكون الكتلة نهائية ومعروفة بأنها موجودة في السلسلة الرئيسية. [ تشرح هذه المشكلة ](https://github.com/graphprotocol/graph-node/issues/1405) ماهية هذه القيود بالتفصيل. 
-#### Example +#### مثال ```graphql { @@ -178,9 +178,9 @@ Note that the current implementation is still subject to certain limitations tha } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +سيعود هذا الاستعلام بكيانات `Challenge` وكيانات `Application` المرتبطة بها ، كما كانت موجودة مباشرة بعد معالجة رقم الكتلة 8،000،000. -#### Example +#### مثال ```graphql { @@ -194,26 +194,26 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +سيعود هذا الاستعلام بكيانات `Challenge` وكيانات `Application` المرتبطة بها ، كما كانت موجودة مباشرة بعد معالجة الكتلة باستخدام hash المحددة. -### Fulltext Search Queries +### استعلامات بحث النص الكامل -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developer/create-subgraph-hosted#defining-fulltext-search-fields) to add fulltext search to your subgraph. +حقول استعلام البحث عن نص كامل توفر API للبحث عن نص تعبيري يمكن إضافتها إلى مخطط الـ subgraph وتخصيصها. راجع [ تعريف حقول بحث النص الكامل ](/developer/create-subgraph-hosted#defining-fulltext-search-fields) لإضافة بحث نص كامل إلى الـ subgraph الخاص بك. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +استعلامات البحث عن النص الكامل لها حقل واحد مطلوب ، وهو `text` ، لتوفير عبارة البحث. تتوفر العديد من عوامل النص الكامل الخاصة لاستخدامها في حقل البحث `text`. -Fulltext search operators: +عوامل تشغيل البحث عن النص الكامل: -| Symbol | Operator | Description | -| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| رمز | عامل التشغيل | الوصف | +| ----------- | ------------ | --------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة | +| | | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة | +| `<->` | `Follow by` | يحدد المسافة بين كلمتين. | +| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) | -#### Examples +#### أمثلة -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +باستخدام العامل `or` ، سيقوم الاستعلام هذا بتصفية blog الكيانات التي تحتوي على أشكال مختلفة من "anarchism" أو "crumpet" في حقول النص الكامل الخاصة بها. 
```graphql { @@ -226,7 +226,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +العامل `follow by` يحدد الكلمات بمسافة محددة عن بعضها في مستندات النص-الكامل. الاستعلام التالي سيعيد جميع الـ blogs التي تحتوي على أشكال مختلفة من "decentralize" متبوعة بكلمة "philosophy" ```graphql { @@ -239,7 +239,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +اجمع بين عوامل تشغيل النص-الكامل لعمل فلترة أكثر تعقيدا. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". ```graphql { @@ -252,16 +252,16 @@ Combine fulltext operators to make more complex filters. With a pretext search o } ``` -## Schema +## المخطط -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +يتم تعريف مخطط مصدر البيانات الخاص بك - أي أنواع الكيانات والقيم والعلاقات المتاحة للاستعلام - من خلال [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +مخططات GraphQL تعرف عموما أنواع الجذر لـ `queries`, و `subscriptions` و`mutations`. The Graph يدعم فقط `queries`. يتم إنشاء نوع الجذر `Query` لـ subgraph تلقائيا من مخطط GraphQL المضمن في subgraph manifest الخاص بك. -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> ** ملاحظة: ** الـ API الخاصة بنا لا تعرض الـ mutations لأنه يُتوقع من المطورين إصدار إجراءات مباشرة لـblockchain الأساسي من تطبيقاتهم. -### Entities +### الكيانات -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +سيتم التعامل مع جميع أنواع GraphQL التي تحتوي على توجيهات `entity@` في مخططك على أنها كيانات ويجب أن تحتوي على حقل `ID`. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> ** ملاحظة: ** في الوقت الحالي ، يجب أن تحتوي جميع الأنواع في مخططك على توجيه `entity@`. في المستقبل ، سنتعامل مع الأنواع التي لا تحتوي على التوجيه `entity@` ككائنات، لكن هذا غير مدعوم حتى الآن. 
From cd6c1a8999c965a198e4efc1185eb9ce0b40a8ee Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:27 -0500 Subject: [PATCH 165/241] New translations matchstick.mdx (Spanish) --- pages/es/developer/matchstick.mdx | 88 +++++++++++++++---------------- 1 file changed, 44 insertions(+), 44 deletions(-) diff --git a/pages/es/developer/matchstick.mdx b/pages/es/developer/matchstick.mdx index 3cf1ec761bb9..2cd0e327579d 100644 --- a/pages/es/developer/matchstick.mdx +++ b/pages/es/developer/matchstick.mdx @@ -1,16 +1,16 @@ --- -title: Unit Testing Framework +title: Marco de Unit Testing --- -Matchstick is a unit testing framework, developed by [LimeChain](https://limechain.tech/), that enables subgraph developers to test their mapping logic in a sandboxed environment and deploy their subgraphs with confidence! +Matchstick es un marco de unit testing, desarrollado por [LimeChain](https://limechain.tech/), que permite a los desarrolladores de subgrafos probar su lógica de mapeo en un entorno sandbox y desplegar sus subgrafos con confianza! -Follow the [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) to install. Now, you can move on to writing your first unit test. +Sigue la [Matchstick installation guide](https://github.com/LimeChain/matchstick/blob/main/README.md#quick-start-) para instalar. Ahora, puede pasar a escribir tu primera unit test. -## Write a Unit Test +## Escribe una Unit Test -Let's see how a simple unit test would look like, using the Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). +Veamos cómo sería una unit test sencilla, utilizando el Gravatar [Example Subgraph](https://github.com/graphprotocol/example-subgraph). -Assuming we have the following handler function (along with two helper functions to make our life easier): +Suponiendo que tenemos la siguiente función handler (junto con dos funciones de ayuda para facilitarnos la vida): ```javascript export function handleNewGravatar(event: NewGravatar): void { @@ -61,7 +61,7 @@ export function createNewGravatarEvent( } ``` -We first have to create a test file in our project. We have chosen the name `gravity.test.ts`. In the newly created file we need to define a function named `runTests()`. It is important that the function has that exact name. This is an example of how our tests might look like: +Primero tenemos que crear un archivo de prueba en nuestro proyecto. Hemos elegido el nombre `gravity.test.ts`. En el archivo recién creado tenemos que definir una función llamada `runTests()`. Es importante que la función tenga ese nombre exacto. Este es un ejemplo de cómo podrían ser nuestras pruebas: ```typescript import { clearStore, test, assert } from 'matchstick-as/assembly/index' @@ -95,27 +95,27 @@ export function runTests(): void { } ``` -That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks. The rest of it is pretty straightforward - here's what happens: +¡Es mucho para desempacar! En primer lugar, una cosa importante a notar es que estamos importando cosas de `matchstick-as`, nuestra biblioteca de ayuda de AssemblyScript (distribuida como un módulo npm). 
Puedes encontrar el repositorio [aquí](https://github.com/LimeChain/matchstick-as). `matchstick-as` nos proporciona útiles métodos de prueba y también define la función `test()` que utilizaremos para construir nuestros bloques de prueba. El resto es bastante sencillo: esto es lo que ocurre: -- We're setting up our initial state and adding one custom Gravatar entity; -- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; -- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; -- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called; -- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. +- Estamos configurando nuestro estado inicial y añadiendo una entidad Gravatar personalizada; +- Definimos dos objetos de evento `NewGravatar` junto con sus datos, utilizando la función `createNewGravatarEvent()`; +- Estamos llamando a los métodos handlers de esos eventos - `handleNewGravatars()` y pasando la lista de nuestros eventos personalizados; +- Hacemos valer el estado del almacén. ¿Cómo funciona eso? - Pasamos una combinación única de tipo de Entidad e id. A continuación, comprobamos un campo específico de esa Entidad y afirmamos que tiene el valor que esperamos que tenga. Hacemos esto tanto para la Entidad Gravatar inicial que añadimos al almacén, como para las dos entidades Gravatar que se añaden cuando se llama a la función del handler; +- Y por último - estamos limpiando el almacén usando `clearStore()` para que nuestra próxima prueba pueda comenzar con un objeto almacén fresco y vacío. Podemos definir tantos bloques de prueba como queramos. -There we go - we've created our first test! 👏 +Ya está: ¡hemos creado nuestra primera prueba! 👏 -❗ **IMPORTANT:** _In order for the tests to work, we need to export the `runTests()` function in our mappings file. It won't be used there, but the export statement has to be there so that it can get picked up by Rust later when running the tests._ +❗ **IMPORTANTE:** _ Para que las pruebas funcionen, necesitamos exportar la función `runTests()` en nuestro archivo de mapeo. No se utilizará allí, pero la declaración de exportación tiene que estar allí para que pueda ser recogida por Rust más tarde al ejecutar las pruebas._ -You can export the tests wrapper function in your mappings file like this: +Puedes exportar la función wrapper de las pruebas en tu archivo de mapeo de la siguiente manera: ``` export { runTests } from "../tests/gravity.test.ts"; ``` -❗ **IMPORTANT:** _Currently there's an issue with using Matchstick when deploying your subgraph. Please only use Matchstick for local testing, and remove/comment out this line (`export { runTests } from "../tests/gravity.test.ts"`) once you're done. We expect to resolve this issue shortly, sorry for the inconvenience!_ +❗ **IMPORTANTE:** _Actualmente hay un problema con el uso de Matchstick cuando se despliega tu subgrafo. 
Por favor, sólo usa Matchstick para pruebas locales, y elimina/comenta esta línea (`export { runTests } de "../tests/gravity.test.ts"`) una vez que hayas terminado. Esperamos resolver este problema en breve, ¡disculpa las molestias!_ -_If you don't remove that line, you will get the following error message when attempting to deploy your subgraph:_ +_Si no eliminas esa línea, obtendrás el siguiente mensaje de error al intentar desplegar tu subgrafo:_ ``` /... @@ -123,28 +123,28 @@ Mapping terminated before handling trigger: oneshot canceled .../ ``` -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Ahora, para ejecutar nuestras pruebas, sólo tienes que ejecutar lo siguiente en la carpeta raíz de tu subgrafo: `graph test Gravity` -And if all goes well you should be greeted with the following: +Y si todo va bien deberías ser recibido con lo siguiente: -![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) +![Matchstick diciendo "¡Todas las pruebas superadas!"](/img/matchstick-tests-passed.png) -## Common test scenarios +## Escenarios de prueba comunes -### Hydrating the store with a certain state +### Hidratar la tienda con un cierto estado -Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity: +Los usuarios pueden hidratar la tienda con un conjunto conocido de entidades. Aquí hay un ejemplo para inicializar la tienda con una entidad Gravatar: ```typescript let gravatar = new Gravatar('entryId') gravatar.save() ``` -### Calling a mapping function with an event +### Llamada a una función de mapeo con un evento -A user can create a custom event and pass it to a mapping function that is bound to the store: +Un usuario puede crear un evento personalizado y pasarlo a una función de mapeo que está vinculada a la tienda: ```typescript import { store } from 'matchstick-as/assembly/store' @@ -156,9 +156,9 @@ let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01 handleNewGravatar(newGravatarEvent) ``` -### Calling all of the mappings with event fixtures +### Llamar a todos los mapeos con fixtures de eventos -Users can call the mappings with test fixtures. +Los usuarios pueden llamar a los mapeos con fixtures de prueba. ```typescript import { NewGravatar } from '../../generated/Gravity/Gravity' @@ -180,9 +180,9 @@ export function handleNewGravatars(events: NewGravatar[]): void { } ``` -### Mocking contract calls +### Simular llamadas de contratos -Users can mock contract calls: +Los usuarios pueden simular las llamadas de los contratos: ```typescript import { addMetadata, assert, createMockedFunction, clearStore, test } from 'matchstick-as/assembly/index' @@ -202,9 +202,9 @@ let result = gravity.gravatarToOwner(bigIntParam) assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result)) ``` -As demonstrated, in order to mock a contract call and hardcore a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value. +Como se ha demostrado, para simular (mock) una llamada a un contrato y endurecer un valor de retorno, el usuario debe proporcionar una dirección de contrato, el nombre de la función, la firma de la función, una array de argumentos y, por supuesto, el valor de retorno. 
-Users can also mock function reverts: +Los usuarios también pueden simular las reversiones de funciones: ```typescript let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7') @@ -213,9 +213,9 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri .reverts() ``` -### Asserting the state of the store +### Afirmar el estado del almacén -Users are able to assert the final (or midway) state of the store through asserting entities. In order to do this, the user has to supply an Entity type, the specific ID of an Entity, a name of a field on that Entity, and the expected value of the field. Here's a quick example: +Los usuarios pueden hacer una aserción al estado final (o intermedio) del almacén a través de entidades de aserción. Para ello, el usuario tiene que suministrar un tipo de Entidad, el ID específico de una Entidad, el nombre de un campo en esa Entidad y el valor esperado del campo. Aquí hay un ejemplo rápido: ```typescript import { assert } from 'matchstick-as/assembly/index' @@ -227,11 +227,11 @@ gravatar.save() assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') ``` -Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully. +Al ejecutar la función assert.fieldEquals() se comprobará la igualdad del campo dado con el valor esperado dado. La prueba fallará y se emitirá un mensaje de error si los valores son **NO** iguales. En caso contrario, la prueba pasará con éxito. -### Interacting with Event metadata +### Interacción con los metadatos de los Eventos -Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function. The following example shows how you can read/write to those fields on the Event object: +Los usuarios pueden utilizar los metadatos de la transacción por defecto, que podrían ser devueltos como un ethereum.Event utilizando la función `newMockEvent()`. El siguiente ejemplo muestra cómo se puede leer/escribir en esos campos del objeto Evento: ```typescript // Read @@ -242,26 +242,26 @@ let UPDATED_ADDRESS = '0xB16081F360e3847006dB660bae1c6d1b2e17eC2A' newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) ``` -### Asserting variable equality +### Afirmar la igualdad de las variables ```typescript assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); ``` -### Asserting that an Entity is **not** in the store +### Afirmar que una Entidad es **no** en el almacén -Users can assert that an entity does not exist in the store. The function takes an entity type and an id. If the entity is in fact in the store, the test will fail with a relevant error message. Here's a quick example of how to use this functionality: +Los usuarios pueden afirmar que una entidad no existe en el almacén. La función toma un tipo de entidad y un id. Si la entidad está de hecho en el almacén, la prueba fallará con un mensaje de error relevante. Aquí hay un ejemplo rápido de cómo utilizar esta funcionalidad: ```typescript assert.notInStore('Gravatar', '23') ``` -### Test run time duration in the log output +### Duración del tiempo de ejecución de la prueba en la salida del registro -The log output includes the test run duration. Here's an example: +La salida del registro incluye la duración de la prueba. 
Aquí hay un ejemplo: `Jul 09 14:54:42.420 INFO Program execution time: 10.06022ms` -## Feedback +## Comentarios -If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing. +Si tienes alguna pregunta, comentario, petición de características o simplemente quieres ponerte en contacto, el mejor lugar sería The Graph Discord, donde tenemos un canal dedicado a Matchstick, llamado 🔥| unit-testing. From c3ee41fb96ba94df543a4390403a915fbd105f51 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:28 -0500 Subject: [PATCH 166/241] New translations query-the-graph.mdx (Japanese) --- pages/ja/developer/query-the-graph.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/ja/developer/query-the-graph.mdx b/pages/ja/developer/query-the-graph.mdx index ae480b1e6883..5be6824eaafa 100644 --- a/pages/ja/developer/query-the-graph.mdx +++ b/pages/ja/developer/query-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Query The Graph +title: グラフのクエリ --- -With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +サブグラフがデプロイされた状態で、[Graph Explorer](https://thegraph.com/explorer)にアクセスすると、[GraphiQL](https://github.com/graphql/graphiql)インターフェースが表示され、サブグラフにデプロイされた GraphQL API を探索して、クエリを発行したり、スキーマを表示したりすることができます。 -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +以下に例を示しますが、サブグラフのエンティティへのクエリの方法については、[Query API](/developer/graphql-api)を参照してください。 -#### Example +#### 例 -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +このクエリは、マッピングが作成したすべてのカウンターを一覧表示します。 作成するのは 1 つだけなので、結果には 1 つの`デフォルトカウンター ```graphql { @@ -19,14 +19,14 @@ This query lists all the counters our mapping has created. Since we only create } ``` -## Using The Graph Explorer +## グラフエクスプローラの利用 -Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. +分散型グラフエクスプローラに公開されているサブグラフには、それぞれ固有のクエリ URL が設定されており、サブグラフの詳細ページに移動し、右上の「クエリ」ボタンをクリックすることで確認できます。 これは、サブグラフの詳細ページに移動し、右上の「クエリ」ボタンをクリックすると、サブグラフの固有のクエリ URL と、そのクエリの方法を示すサイドペインが表示されます。 ![Query Subgraph Pane](/img/query-subgraph-pane.png) -As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). +お気づきのように、このクエリ URL には固有の API キーを使用する必要があります。 API キーの作成と管理は、[Subgraph Studio](https://thegraph.com/studio)の「API Keys」セクションで行うことができます。 Subgraph Studio の使用方法については、[こちら](/studio/subgraph-studio)をご覧ください。 -Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). 
+API キーを使用してサブグラフをクエリすると、GRT で支払われるクエリ料金が発生します。 課金については[こちら](/studio/billing)をご覧ください。 -You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. +また、「プレイグラウンド」タブの GraphQL プレイグラウンドを使用して、The Graph Explorer 内のサブグラフに問い合わせを行うことができます。 From c75b306cc31a330b07c7d068db02fe79bf287be6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:30 -0500 Subject: [PATCH 167/241] New translations publish-subgraph.mdx (Spanish) --- pages/es/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/es/developer/publish-subgraph.mdx b/pages/es/developer/publish-subgraph.mdx index 2f35f5eb1bae..2d0a971c4286 100644 --- a/pages/es/developer/publish-subgraph.mdx +++ b/pages/es/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: Publish a Subgraph to the Decentralized Network +title: Publicar un Subgrafo en la Red Descentralizada --- -Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. +Una vez que tu subgrafo ha sido [desplegado en el Subgraph Studio](/studio/deploy-subgraph-studio), lo has probado y estás listo para ponerlo en producción, puedes publicarlo en la red descentralizada. -Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. +La publicación de un Subgrafo en la red descentralizada hace que esté disponible para que los [curadores](/curating) comiencen a curar en él, y para que los [indexadores](/indexing) comiencen a indexarlo. -For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). +Para ver un tutorial sobre cómo publicar un subgrafo en la red descentralizada, consulta [este video](https://youtu.be/HfDgC2oNnwo?t=580). -### Networks +### Redes -The decentralized network currently supports both Rinkeby and Ethereum Mainnet. +La red descentralizada admite actualmente tanto Rinkeby como Ethereum Mainnet. -### Publishing a subgraph +### Publicar un subgrafo -Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). +Los subgrafos se pueden publicar en la red descentralizada directamente desde el panel de control de Subgraph Studio haciendo clic en el botón **Publish**. Una vez publicado un subgrafo, estará disponible para su visualización en The [Graph Explorer](https://thegraph.com/explorer/). -- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. +- Los subgrafos publicados en Rinkeby pueden indexar y consultar datos de la red Rinkeby o de la red principal de Ethereum. -- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. 
+- Los subgrafos publicados en la red principal (mainnet) de Ethereum sólo pueden indexar y consultar datos de la red principal de Ethereum, lo que significa que no se pueden publicar subgrafos en la red descentralizada principal que indexen y consulten datos de la red de prueba (testnet). -- When publishing a new version for an existing subgraph the same rules apply as above. +- Cuando se publica una nueva versión para un subgrafo existente se aplican las mismas reglas que las anteriores. -### Updating metadata for a published subgraph +### Actualización de los metadatos de un subgrafo publicado -Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. +Una vez que tu subgrafo ha sido publicado en la red descentralizada, puedes modificar los metadatos en cualquier momento haciendo la actualización en el panel de control de Subgraph Studio del subgrafo. Luego de guardar los cambios y publicar tus actualizaciones en la red, éstas se reflejarán en The Graph Explorer. Esto no creará una nueva versión, ya que tu despliegue no ha cambiado. From 2b2417a20b5c3fd5f76fb4c55ac1af1102016b1d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:31 -0500 Subject: [PATCH 168/241] New translations publish-subgraph.mdx (Arabic) --- pages/ar/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ar/developer/publish-subgraph.mdx b/pages/ar/developer/publish-subgraph.mdx index 2f35f5eb1bae..3d51eccafeed 100644 --- a/pages/ar/developer/publish-subgraph.mdx +++ b/pages/ar/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: Publish a Subgraph to the Decentralized Network +title: نشر Subgraph للشبكة اللامركزية --- -Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. +بمجرد أن الـ subgraph الخاص بك [قد تم نشره لـ Subgraph Studio](/studio/deploy-subgraph-studio) ، وقمت باختباره ، وأصبحت جاهزا لوضعه في الإنتاج ، يمكنك بعد ذلك نشره للشبكة اللامركزية. -Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. +يؤدي نشر Subgraph على الشبكة اللامركزية إلى الإتاحة [ للمنسقين ](/curating) لبدء التنسيق، و [ للمفهرسين](/indexing) لبدء الفهرسة. -For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). +للحصول على إرشادات حول كيفية نشر subgraph على الشبكة اللامركزية ، راجع [ هذا الفيديو ](https://youtu.be/HfDgC2oNnwo؟t=580). -### Networks +### الشبكات -The decentralized network currently supports both Rinkeby and Ethereum Mainnet. +تدعم الشبكة اللامركزية حاليا كلا من Rinkeby و Ethereum Mainnet. -### Publishing a subgraph +### نشر subgraph -Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). 
+يمكن نشر الـ Subgraphs على الشبكة اللامركزية مباشرة من Subgraph Studio dashboard بالنقر فوق الزر ** Publish **. بمجرد نشر الـ subgraph ، فإنه سيكون متاحا للعرض في [ Graph Explorer ](https://thegraph.com/explorer/). -- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. +- يمكن لـ Subgraphs المنشور على Rinkeby فهرسة البيانات والاستعلام عنها من شبكة Rinkeby أو Ethereum Mainnet. -- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. +- يمكن لـ Subgraphs المنشور على Ethereum Mainnet فقط فهرسة البيانات والاستعلام عنها من Ethereum Mainnet ، مما يعني أنه لا يمكنك نشر الـ subgraphs على الشبكة اللامركزية الرئيسية التي تقوم بفهرسة بيانات testnet والاستعلام عنها. -- When publishing a new version for an existing subgraph the same rules apply as above. +- عند نشر نسخة جديدة لـ subgraph حالي ، تنطبق عليه نفس القواعد أعلاه. -### Updating metadata for a published subgraph +### تحديث بيانات الـ subgraph المنشور -Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. +بمجرد نشر الـ subgraph الخاص بك على الشبكة اللامركزية ، يمكنك تعديل البيانات الوصفية في أي وقت عن طريق إجراء التحديث في Subgraph Studio dashboard لـ subgraph. بعد حفظ التغييرات ونشر تحديثاتك على الشبكة ، ستنعكس في the Graph Explorer. لن يؤدي هذا إلى إنشاء إصدار جديد ، لأن النشر الخاص بك لم يتغير. From 6c4da9ad55782f14ba0779f5e5105d0c35221f6a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:32 -0500 Subject: [PATCH 169/241] New translations publish-subgraph.mdx (Japanese) --- pages/ja/developer/publish-subgraph.mdx | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/pages/ja/developer/publish-subgraph.mdx b/pages/ja/developer/publish-subgraph.mdx index 2f35f5eb1bae..e2458c5412d8 100644 --- a/pages/ja/developer/publish-subgraph.mdx +++ b/pages/ja/developer/publish-subgraph.mdx @@ -1,27 +1,27 @@ --- -title: Publish a Subgraph to the Decentralized Network +title: 分散型ネットワークへのサブグラフの公開 --- -Once your subgraph has been [deployed to the Subgraph Studio](/studio/deploy-subgraph-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network. +サブグラフが [Subgraph Studioにデプロイ](/studio/deploy-subgraph-studio)され、それをテストし、本番の準備ができたら、分散型ネットワークにパブリッシュすることができます。 -Publishing a Subgraph to the decentralized network makes it available for [curators](/curating) to begin curating on it, and [indexers](/indexing) to begin indexing it. +サブグラフを分散型ネットワークに公開すると、[キュレーター](/curating)がキュレーションを開始したり、[インデクサー](/indexing)がインデックスを作成したりできるようになります。 -For a walkthrough of how to publish a subgraph to the decentralized network, see [this video](https://youtu.be/HfDgC2oNnwo?t=580). +分散型ネットワークにサブグラフを公開する方法については、[こちらのビデオ](https://youtu.be/HfDgC2oNnwo?t=580)をご覧ください。 -### Networks +### ネットワーク -The decentralized network currently supports both Rinkeby and Ethereum Mainnet. 
+分散型ネットワークは現在、RinkebyとEthereum Mainnetの両方をサポートしています。 -### Publishing a subgraph +### サブグラフの公開 -Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/). +サブグラフは、Subgraph Studioのダッシュボードから**Publish** ボタンをクリックすることで、直接分散型ネットワークに公開することができます。 サブグラフが公開されると、[Graph Explorer](https://thegraph.com/explorer/)で閲覧できるようになります。 -- Subgraphs published to Rinkeby can index and query data from either the Rinkeby network or Ethereum Mainnet. +- Rinkebyに公開されたサブグラフは、RinkebyネットワークまたはEthereum Mainnetのいずれかからデータをインデックス化してクエリすることができます。 -- Subgraphs published to Ethereum Mainnet can only index and query data from Ethereum Mainnet, meaning that you cannot publish subgraphs to the main decentralized network that index and query testnet data. +- Ethereum Mainnetに公開されたサブグラフは、Ethereum Mainnetのデータのみをインデックス化してクエリすることができます。つまり、テストネットのデータをインデックス化して照会するサブグラフをメインの分散型ネットワークに公開することはできません。 -- When publishing a new version for an existing subgraph the same rules apply as above. +- 既存のサブグラフの新バージョンを公開する場合は、上記と同じルールが適用されます。 -### Updating metadata for a published subgraph +### 公開されたサブグラフのメタデータの更新 -Once your subgraph has been published to the decentralized network, you can modify the metadata at any time by making the update in the Subgraph Studio dashboard of the subgraph. After saving the changes and publishing your updates to the network, they will be reflected in the Graph Explorer. This won’t create a new version, as your deployment hasn’t changed. +サブグラフが分散型ネットワークに公開されると、サブグラフのSubgraph Studioダッシュボードで更新を行うことにより、いつでもメタデータを変更することができます。 変更を保存し、更新内容をネットワークに公開すると、グラフエクスプローラーに反映されます。 デプロイメントが変更されていないため、新しいバージョンは作成されません。 From 49928c125687e14606d3a137772ea600af720b72 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:35 -0500 Subject: [PATCH 170/241] New translations query-the-graph.mdx (Spanish) --- pages/es/developer/query-the-graph.mdx | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/es/developer/query-the-graph.mdx b/pages/es/developer/query-the-graph.mdx index ae480b1e6883..f21700f082b8 100644 --- a/pages/es/developer/query-the-graph.mdx +++ b/pages/es/developer/query-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Query The Graph +title: Consultar The Graph --- -With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +Con el subgrafo desplegado, visita el [Graph Explorer](https://thegraph.com/explorer) para abrir una [interfaz GraphQL](https://github.com/graphql/graphiql) en la que podrás explorar la API GraphQL desplegada para el subgrafo emitiendo consultas y viendo el esquema. -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +A continuación se proporciona un ejemplo, pero por favor, consulta la [Query API](/developer/graphql-api) para obtener una referencia completa sobre cómo consultar las entidades del subgrafo. -#### Example +#### Ejemplo -This query lists all the counters our mapping has created. 
Since we only create one, the result will only contain our one `default-counter`: +Estas listas de consultas muestran todos los contadores que nuestro mapeo ha creado. Como sólo creamos uno, el resultado sólo contendrá nuestro único `default-counter`: ```graphql { @@ -19,14 +19,14 @@ This query lists all the counters our mapping has created. Since we only create } ``` -## Using The Graph Explorer +## Uso de The Graph Explorer -Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. +Cada subgrafo publicado en The Graph Explorer descentralizado tiene una URL de consulta única que puedes encontrar navegando a la página de detalles del subgrafo y haciendo clic en el botón "Query (Consulta)" en la esquina superior derecha. Esto abrirá un panel lateral que te dará la URL de consulta única del subgrafo, así como algunas instrucciones sobre cómo consultarlo. -![Query Subgraph Pane](/img/query-subgraph-pane.png) +![Panel de Consulta de Subgrafos](/img/query-subgraph-pane.png) -As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). +Como puede observar, esta URL de consulta debe utilizar una clave de API única. Puedes crear y gestionar tus claves API en el [Subgraph Studio](https://thegraph.com/studio) en la sección "API Keys (Claves API)". Aprende a utilizar Subgraph Studio [aquí](/studio/subgraph-studio). -Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). +La consulta de subgrafos utilizando tus claves API generará tasas de consulta que se pagarán en GRT. Puedes obtener más información sobre la facturación [aquí](/studio/billing). -You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. +También puedes utilizar el playground GraphQL en la pestaña "Playground" para consultar un subgrafo dentro de The Graph Explorer. From 2de07028788ea7b363b908dacf12f4a0b509a9e7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:35 -0500 Subject: [PATCH 171/241] New translations query-the-graph.mdx (Arabic) --- pages/ar/developer/query-the-graph.mdx | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/pages/ar/developer/query-the-graph.mdx b/pages/ar/developer/query-the-graph.mdx index ae480b1e6883..776fbcb6bed1 100644 --- a/pages/ar/developer/query-the-graph.mdx +++ b/pages/ar/developer/query-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Query The Graph +title: الاستعلام عن The Graph --- -With the subgraph deployed, visit the [Graph Explorer](https://thegraph.com/explorer) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +بالـ subgraph المنشور ، قم بزيارة [ Graph Explorer ](https://thegraph.com/explorer) لفتح واجهة [ GraphiQL ](https://github.com/graphql/graphiql) حيث يمكنك استكشاف GraphQL API المنشورة لـ subgraph عن طريق إصدار الاستعلامات وعرض المخطط. 
-An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +تم توفير المثال أدناه ، ولكن يرجى الاطلاع على [Query API](/developer/graphql-api) للحصول على مرجع كامل حول كيفية الاستعلام عن كيانات الـ subgraph. -#### Example +#### مثال -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +يسرد هذا الاستعلام جميع العدادات التي أنشأها الـ mapping الخاص بنا. نظرا لأننا أنشأنا واحدا فقط ، فستحتوي النتيجة فقط على `default-counter`: ```graphql { @@ -19,14 +19,14 @@ This query lists all the counters our mapping has created. Since we only create } ``` -## Using The Graph Explorer +## استخدام The Graph Explorer -Each subgraph published to the decentralized Graph Explorer has a unique query URL that you can find by navigating to the subgraph details page and clicking on the "Query" button on the top right corner. This will open a side pane that will give you the unique query URL of the subgraph as well as some instructions about how to query it. +يحتوي كل subgraph منشور على Graph Explorer اللامركزي على عنوان URL فريد للاستعلام والذي يمكنك العثور عليه بالانتقال إلى صفحة تفاصيل الـ subgraph والنقر على "Query" في الزاوية اليمنى العليا. سيؤدي هذا إلى فتح نافذة جانبية والتي تمنحك عنوان URL فريد للاستعلام لـ subgraph بالإضافة إلى بعض الإرشادات حول كيفية الاستعلام عنه. -![Query Subgraph Pane](/img/query-subgraph-pane.png) +![نافذة الاستعلام عن Subgraph](/img/query-subgraph-pane.png) -As you can notice, this query URL must use a unique API key. You can create and manage your API keys in the [Subgraph Studio](https://thegraph.com/studio) in the "API Keys" section. Learn more about how to use Subgraph Studio [here](/studio/subgraph-studio). +كما يمكنك أن تلاحظ ، أنه يجب أن يستخدم عنوان الاستعلام URL مفتاح API فريد. يمكنك إنشاء وإدارة مفاتيح API الخاصة بك في [ Subgraph Studio ](https://thegraph.com/studio) في قسم "API Keys". تعرف على المزيد حول كيفية استخدام Subgraph Studio [ هنا ](/studio/subgraph-studio). -Querying subgraphs using your API keys will generate query fees that will be paid in GRT. You can learn more about billing [here](/studio/billing). +سيؤدي الاستعلام عن الـ subgraphs باستخدام مفاتيح API إلى إنشاء رسوم الاستعلام التي سيتم دفعها كـ GRT. يمكنك معرفة المزيد حول الفوترة [ هنا ](/studio/billing). -You can also use the GraphQL playground in the "Playground" tab to query a subgraph within The Graph Explorer. +يمكنك أيضا استخدام GraphQL playground في علامة التبويب "Playground" للاستعلام عن subgraph داخل The Graph Explorer. From 749b4226d9d87660386ec37feac26c9c52f02878 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:36 -0500 Subject: [PATCH 172/241] New translations migrating-subgraph.mdx (Korean) --- pages/ko/hosted-service/migrating-subgraph.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ko/hosted-service/migrating-subgraph.mdx b/pages/ko/hosted-service/migrating-subgraph.mdx index 85f72f053b30..260f084c0e7d 100644 --- a/pages/ko/hosted-service/migrating-subgraph.mdx +++ b/pages/ko/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## Introduction +## 소개 This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. 
The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. @@ -139,7 +139,7 @@ If you're still confused, fear not! Check out the following resources or watch o From 53993e110f156a6ac8a2075eb53dba45a77e1dc2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:37 -0500 Subject: [PATCH 173/241] New translations migrating-subgraph.mdx (Chinese Simplified) --- pages/zh/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/hosted-service/migrating-subgraph.mdx b/pages/zh/hosted-service/migrating-subgraph.mdx index 85f72f053b30..979d684faeed 100644 --- a/pages/zh/hosted-service/migrating-subgraph.mdx +++ b/pages/zh/hosted-service/migrating-subgraph.mdx @@ -2,7 +2,7 @@ title: Migrating an Existing Subgraph to The Graph Network --- -## Introduction +## 介绍 This is a guide for the migration of subgraphs from the Hosted Service (also known as the Hosted Service) to The Graph Network. The migration to The Graph Network has been successful for projects like Opyn, UMA, mStable, Audius, PoolTogether, Livepeer, RAI, Enzyme, DODO, Opyn, Pickle, and BadgerDAO all of which are relying on data served by Indexers on the network. There are now over 200 subgraphs live on The Graph Network, generating query fees and actively indexing web3 data. From 17ebcb1127005ad9e8c77ee032cd8df29f296942 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:40 -0500 Subject: [PATCH 174/241] New translations studio-faq.mdx (Spanish) --- pages/es/studio/studio-faq.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/es/studio/studio-faq.mdx b/pages/es/studio/studio-faq.mdx index 4db4d7ccddaa..8ed8de7d106c 100644 --- a/pages/es/studio/studio-faq.mdx +++ b/pages/es/studio/studio-faq.mdx @@ -1,21 +1,21 @@ --- -title: Subgraph Studio FAQs +title: Preguntas Frecuentes sobre Subgraph Studio --- -### 1. How do I create an API Key? +### 1. ¿Cómo puedo crear una clave API? -In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. +En Subgraph Studio, puedes crear las claves de la API que necesites y añadir configuraciones de seguridad a cada una de ellas. -### 2. Can I create multiple API Keys? +### 2. ¿Puedo crear varias claves API? -A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +R: ¡Sí! Puedes crear varias claves API para utilizarlas en diferentes proyectos. Consulta el enlace [aquí](https://thegraph.com/studio/apikeys/). -### 3. How do I restrict a domain for an API Key? +### 3. ¿Cómo puedo restringir un dominio para una clave API? -After creating an API Key, in the Security section you can define the domains that can query a specific API Key. +Después de crear una Clave de API, en la sección Seguridad puedes definir los dominios que pueden consultar una Clave de API específica. -### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +### 4. ¿Cómo puedo encontrar las URL de consulta de los subgrafos si no soy el desarrollador del subgrafo que quiero utilizar? 
-You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. +Puedes encontrar la URL de consulta de cada subgrafo en la sección Detalles del Subgrafo de the Graph Explorer. Al hacer clic en el botón "Query", se te dirigirá a un panel en el que podrás ver la URL de consulta del subgrafo te interesa. A continuación, puedes sustituir el marcador de posición `` por la clave de la API que deseas aprovechar en el Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Recuerda que puedes crear una clave API y consultar cualquier subgrafo publicado en la red, incluso si tú mismo construyes un subgrafo. Estas consultas a través de la nueva clave API, son consultas pagas como cualquier otra en la red. From d5a75274ecd0b43e1cfce838dac66a7b13614b5c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:41 -0500 Subject: [PATCH 175/241] New translations studio-faq.mdx (Arabic) --- pages/ar/studio/studio-faq.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/pages/ar/studio/studio-faq.mdx b/pages/ar/studio/studio-faq.mdx index 4db4d7ccddaa..20b2ffb13a5e 100644 --- a/pages/ar/studio/studio-faq.mdx +++ b/pages/ar/studio/studio-faq.mdx @@ -1,14 +1,14 @@ --- -title: Subgraph Studio FAQs +title: الأسئلة الشائعة حول Subgraph Studio --- -### 1. How do I create an API Key? +### 1. كيف يمكنني إنشاء مفتاح API؟ -In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. +في Subgraph Studio ، يمكنك إنشاء API Keys وذلك حسب الحاجة وإضافة إعدادات الأمان لكل منها. -### 2. Can I create multiple API Keys? +### 2. هل يمكنني إنشاء أكثر من API Keys؟ -A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +A: نعم يمكنك إنشاء أكثر من API Keys وذلك لاستخدامها في مشاريع مختلفة. تحقق من الرابط [هنا](https://thegraph.com/studio/apikeys/). ### 3. How do I restrict a domain for an API Key? From 76245fc02df50980faadacbc9e383834f41a4b46 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:43 -0500 Subject: [PATCH 176/241] New translations studio-faq.mdx (Chinese Simplified) --- pages/zh/studio/studio-faq.mdx | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/pages/zh/studio/studio-faq.mdx b/pages/zh/studio/studio-faq.mdx index 4db4d7ccddaa..b5f894110682 100644 --- a/pages/zh/studio/studio-faq.mdx +++ b/pages/zh/studio/studio-faq.mdx @@ -1,21 +1,21 @@ --- -title: Subgraph Studio FAQs +title: 子图工作室常见问题 --- -### 1. How do I create an API Key? +### 1. 我如何创建一个 API 密钥? -In the Subgraph Studio, you can create API Keys as needed and add security settings to each of them. +在 Subgraph Studio 中,你可以根据需要创建 API 密钥,并为每个密钥添加安全设置。 -### 2. Can I create multiple API Keys? +### 2. 我可以创建多个 API 密钥吗? -A: Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +是的,可以。 你可以创建多个 API 密钥,在不同的项目中使用。 点击 [这里](https://thegraph.com/studio/apikeys/)查看。 -### 3. 
How do I restrict a domain for an API Key? +### 3. 我如何为 API 密钥限制一个域名? -After creating an API Key, in the Security section you can define the domains that can query a specific API Key. +创建了 API 密钥后,在安全部分,你可以定义可以查询特定 API 密钥的域。 -### 4. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +### 4. 如果我不是我想使用的子图的开发者,我怎样才能找到子图的查询 URL? -You can find the query URL of each subgraph in the Subgraph Details section of The Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in the Subgraph Studio. +你可以在 The Graph Explorer 的 Subgraph Details 部分找到每个子图的查询 URL。 当你点击 "查询 "按钮时,你将被引导到一个窗格,在这里你可以查看你感兴趣的子图的查询 URL。 然后你可以把 `api_key` 占位符替换成你想在 Subgraph Studio 中利用的 API 密钥。 -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +请记住,你可以创建一个 API 密钥并查询发布到网络上的任何子图,即使你自己建立了一个子图。 这些通过新的 API 密钥进行的查询,与网络上的任何其他查询一样,都是付费查询。 From 87926c1a89aa7ae0eb5ae1fb052097099b1438ff Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:44 -0500 Subject: [PATCH 177/241] New translations subgraph-studio.mdx (Spanish) --- pages/es/studio/subgraph-studio.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/studio/subgraph-studio.mdx b/pages/es/studio/subgraph-studio.mdx index 9af3926db3df..28cfadea4edc 100644 --- a/pages/es/studio/subgraph-studio.mdx +++ b/pages/es/studio/subgraph-studio.mdx @@ -36,7 +36,7 @@ The best part! When you first create a subgraph, you’ll be directed to fill ou - Your Subgraph Name - Image -- Description +- Descripción - Categories - Website @@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF From 90ebb9bcec1cac578b62af14b5a9499959655048 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:45 -0500 Subject: [PATCH 178/241] New translations multisig.mdx (Spanish) --- pages/es/studio/multisig.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/studio/multisig.mdx b/pages/es/studio/multisig.mdx index 164835bdb8a4..7b0f55c22ffb 100644 --- a/pages/es/studio/multisig.mdx +++ b/pages/es/studio/multisig.mdx @@ -4,7 +4,7 @@ title: Using a Multisig Wallet Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. -### Create a Subgraph +### Crear un Subgrafo Similary to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. 
From 8a7a4fceec76df43a0f79ebef6fdced53587028c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:46 -0500 Subject: [PATCH 179/241] New translations subgraph-studio.mdx (Arabic) --- pages/ar/studio/subgraph-studio.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/studio/subgraph-studio.mdx b/pages/ar/studio/subgraph-studio.mdx index 9af3926db3df..d4e82eeef02e 100644 --- a/pages/ar/studio/subgraph-studio.mdx +++ b/pages/ar/studio/subgraph-studio.mdx @@ -36,7 +36,7 @@ The best part! When you first create a subgraph, you’ll be directed to fill ou - Your Subgraph Name - Image -- Description +- الوصف - Categories - Website @@ -47,7 +47,7 @@ The Graph Network is not yet able to support all of the data-sources & features - Index mainnet Ethereum - Must not use any of the following features: - ipfs.cat & ipfs.map - - Non-fatal errors + - أخطاء غير فادحة - Grafting More features & networks will be added to The Graph Network incrementally. @@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF From 17c5d126ae8702749da9dd95e611f6cd67a7bb60 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:48 -0500 Subject: [PATCH 180/241] New translations subgraph-studio.mdx (Japanese) --- pages/ja/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/subgraph-studio.mdx b/pages/ja/studio/subgraph-studio.mdx index 9af3926db3df..1f5ecf6a7011 100644 --- a/pages/ja/studio/subgraph-studio.mdx +++ b/pages/ja/studio/subgraph-studio.mdx @@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF From e84d4458a04507bdf041d232d49f885e7a6fdc59 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:50 -0500 Subject: [PATCH 181/241] New translations subgraph-studio.mdx (Korean) --- pages/ko/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/studio/subgraph-studio.mdx b/pages/ko/studio/subgraph-studio.mdx index 9af3926db3df..562d588ef26d 100644 --- a/pages/ko/studio/subgraph-studio.mdx +++ b/pages/ko/studio/subgraph-studio.mdx @@ -70,7 +70,7 @@ You’ve made it this far - congrats! Publishing your subgraph means that an IPF From 48a36e56d1caac54a5a1a7339a48487285734b66 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:51 -0500 Subject: [PATCH 182/241] New translations subgraph-studio.mdx (Vietnamese) --- pages/vi/studio/subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/vi/studio/subgraph-studio.mdx b/pages/vi/studio/subgraph-studio.mdx index 9af3926db3df..3bb38004be5a 100644 --- a/pages/vi/studio/subgraph-studio.mdx +++ b/pages/vi/studio/subgraph-studio.mdx @@ -70,7 +70,7 @@ You’ve made it this far - congrats! 
Publishing your subgraph means that an IPF From fe6660351e27e10ca88876d614a7e00724471dc3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:53 -0500 Subject: [PATCH 183/241] New translations near.mdx (Spanish) --- pages/es/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/es/supported-networks/near.mdx b/pages/es/supported-networks/near.mdx index 288ac380494c..f86cb2b89c0d 100644 --- a/pages/es/supported-networks/near.mdx +++ b/pages/es/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## FAQ +## Preguntas frecuentes ### How does the beta work? From 7b3abc559a3d8c17bd6e6ab968b439e6691787a7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:54 -0500 Subject: [PATCH 184/241] New translations near.mdx (Arabic) --- pages/ar/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/supported-networks/near.mdx b/pages/ar/supported-networks/near.mdx index 288ac380494c..c364fd4ecf89 100644 --- a/pages/ar/supported-networks/near.mdx +++ b/pages/ar/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## FAQ +## الأسئلة الشائعة ### How does the beta work? From 58e2a3f7ae0bab339a91610ef87a0a5ef226f722 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:55 -0500 Subject: [PATCH 185/241] New translations near.mdx (Japanese) --- pages/ja/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/supported-networks/near.mdx b/pages/ja/supported-networks/near.mdx index 288ac380494c..0965bdee1675 100644 --- a/pages/ja/supported-networks/near.mdx +++ b/pages/ja/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## FAQ +## よくある質問 ### How does the beta work? From d1988e9910b65de29042914f3b56710f30027389 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:57 -0500 Subject: [PATCH 186/241] New translations near.mdx (Chinese Simplified) --- pages/zh/supported-networks/near.mdx | 54 ++++++++++++++-------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/pages/zh/supported-networks/near.mdx b/pages/zh/supported-networks/near.mdx index 288ac380494c..e5980fba4e95 100644 --- a/pages/zh/supported-networks/near.mdx +++ b/pages/zh/supported-networks/near.mdx @@ -1,56 +1,56 @@ --- -title: Building Subgraphs on NEAR +title: 在 NEAR 上构建子图 --- -> NEAR support in Graph Node and on the Hosted Service is in beta: please contact near@thegraph.com with any questions about building NEAR subgraphs! +> Graph节点和托管服务中对NEAR 的支持目前处于测试阶段:任何有关构建 NEAR 子图的任何问题,请联系 near@thegraph.com! -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +本指南介绍了如何在[NEAR区块链](https://docs.near.org/)上构建索引智能合约的子图。 -## What is NEAR? +## NEAR是什么? -[NEAR](https://near.org/) is a smart contract platform for building decentralised applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information. 
+[NEAR](https://near.org/) 是一个用于构建去中心化应用程序的智能合约平台。 请访问 [官方文档](https://docs.near.org/docs/concepts/new-to-near) 了解更多信息。 -## What are NEAR subgraphs? +## NEAR子图是什么? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +Graph 为开发人员提供了一种被称为子图的工具,利用这个工具,开发人员能够处理区块链事件,并通过 GraphQL API提供结果数据。 [Graph节点](https://github.com/graphprotocol/graph-node)现在能够处理 NEAR 事件,这意味着 NEAR 开发人员现在可以构建子图来索引他们的智能合约。 -Subgraphs are event-based, which means that they listen for and then process on-chain events. There are currently two types of handlers supported for NEAR subgraphs: +子图是基于事件的,这意味着子图可以侦听并处理链上事件。 NEAR 子图目前支持两种类型的处理程序: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- 区块处理器: 这些处理程序在每个新区块上运行 +- 收据处理器: 每次在指定帐户上一个消息被执行时运行。 -[From the NEAR documentation](https://docs.near.org/docs/concepts/transaction#receipt): +[NEAR 文档中](https://docs.near.org/docs/concepts/transaction#receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Receipt是系统中唯一可操作的对象。 当我们在 NEAR 平台上谈论“处理交易”时,这最终意味着在某个时候“应用收据”。 -## Building a NEAR Subgraph +## 构建NEAR子图 -`@graphprotocol/graph-cli` is a command line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli`是一个用于构建和部署子图的命令行工具。 -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` 是子图特定类型的库。 -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR子图开发需要`0.23.0`以上版本的`graph-cli`,以及 `0.23.0`以上版本的`graph-ts`。 -> Building a NEAR subgraph is very similar to building a subgraph which indexes Ethereum. +> 构建 NEAR 子图与构建索引以太坊的子图非常相似。 -There are three aspects of subgraph definition: +子图定义包括三个方面: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** 子图清单,定义感兴趣的数据源以及如何处理它们。 NEAR 是一种全新`类型`数据源。 -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developer/create-subgraph-hosted#the-graphql-schema). +**schema.graphql:** 一个模式文件,它定义为您的子图存储哪些数据,以及如何通过 GraphQL 查询它。 NEAR 子图的要求包含在 [现有文档](/developer/create-subgraph-hosted#the-graphql-schema)中。 -**AssemblyScript Mappings:** [AssemblyScript code](/developer/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types, and new JSON parsing functionality. 
+**AssemblyScript 映射:**将事件数据转换为模式文件中定义的实体的[AssemblyScript 代码](/developer/assemblyscript-api)。 NEAR 支持引入了 NEAR 特定的数据类型和新的JSON 解析功能。 -During subgraph development there are two key commands: +在子图开发过程中,有两个关键命令: ```bash -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph codegen # 从清单中标识的模式文件生成类型 +$ graph build # 从 AssemblyScript 文件生成 Web Assembly,并在 /build 文件夹中准备所有子图文件 ``` -### Subgraph Manifest Definition +### 子图清单定义 -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:: +子图清单(`subgraph.yaml`)标识子图的数据源、感兴趣的触发器以及响应这些触发器而运行的函数。 以下是一个NEAR 的子图清单的例子: ```yaml specVersion: 0.0.2 @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## FAQ +## 常见问题 ### How does the beta work? From 1fe4862af08747be8497f095f53c5dbd8afb9a32 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:58 -0500 Subject: [PATCH 187/241] New translations multisig.mdx (Arabic) --- pages/ar/studio/multisig.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ar/studio/multisig.mdx b/pages/ar/studio/multisig.mdx index 164835bdb8a4..555ba11f9da9 100644 --- a/pages/ar/studio/multisig.mdx +++ b/pages/ar/studio/multisig.mdx @@ -4,7 +4,7 @@ title: Using a Multisig Wallet Subgraph Studio currently doesn't support signing with multisig wallets. Until then, you can follow this guide on how to publish your subgraph by invoking the [GNS contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/discovery/GNS.sol) functions. -### Create a Subgraph +### إنشاء الـ Subgraph Similary to using a regular wallet, you can create a subgraph by connecting your non-multisig wallet in Subgraph Studio. Once you connect the wallet, simply create a new subgraph. Make sure you fill out all the details, such as subgraph name, description, image, website, and source code url if applicable. From d3702e075be3fa6d81587dc5987da89b0bf17862 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:10:59 -0500 Subject: [PATCH 188/241] New translations migrating-subgraph.mdx (Vietnamese) --- pages/vi/hosted-service/migrating-subgraph.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/vi/hosted-service/migrating-subgraph.mdx b/pages/vi/hosted-service/migrating-subgraph.mdx index 85f72f053b30..e81d98dd5e2a 100644 --- a/pages/vi/hosted-service/migrating-subgraph.mdx +++ b/pages/vi/hosted-service/migrating-subgraph.mdx @@ -139,7 +139,7 @@ If you're still confused, fear not! 
Check out the following resources or watch o From 2e0a5c93914e5067d1dac1cccec0d13f7c5bc41d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:00 -0500 Subject: [PATCH 189/241] New translations what-is-hosted-service.mdx (Chinese Simplified) --- pages/zh/hosted-service/what-is-hosted-service.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/zh/hosted-service/what-is-hosted-service.mdx b/pages/zh/hosted-service/what-is-hosted-service.mdx index 7f604c8dc31a..24d7068c1b44 100644 --- a/pages/zh/hosted-service/what-is-hosted-service.mdx +++ b/pages/zh/hosted-service/what-is-hosted-service.mdx @@ -1,8 +1,8 @@ --- -title: What is the Hosted Service? +title: 什么是托管服务? --- -This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +本节将引导您将子图部署到 [托管服务](https://thegraph.com/hosted-service/) 提醒一下,托管服务不会很快关闭。 一旦去中心化网络达到托管服务相当的功能,我们将逐步取消托管服务。 您在托管服务上部署的子图在[此处](https://thegraph.com/hosted-service/)仍然可用。 If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. @@ -42,9 +42,9 @@ graph init --from-example --product hosted-service / The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. -## Supported Networks on the Hosted Service +## 托管服务支持的网络 -Please note that the following networks are supported on the Hosted Service. 
Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer)
+请注意托管服务支持以下网络。 [Graph Explorer](https://thegraph.com/explorer)目前不支持以太坊主网（“主网”）之外的网络。

 - `mainnet`
 - `kovan`

From 3394056d1b5c4e6fd991d208897b1b9c593c9892 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
 
Date: Thu, 27 Jan 2022 20:11:01 -0500
Subject: [PATCH 190/241] New translations query-hosted-service.mdx (Spanish)

---
 .../es/hosted-service/query-hosted-service.mdx | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/pages/es/hosted-service/query-hosted-service.mdx b/pages/es/hosted-service/query-hosted-service.mdx
index 731e3a3120b2..cdb6bf9f8135 100644
--- a/pages/es/hosted-service/query-hosted-service.mdx
+++ b/pages/es/hosted-service/query-hosted-service.mdx
@@ -1,14 +1,14 @@
 ---
-title: Query the Hosted Service
+title: Consultas en el Servicio Alojado
 ---

-With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema.
+Con el subgrafo desplegado, visita el [Servicio alojado](https://thegraph.com/hosted-service/) para abrir una interfaz [GraphiQL](https://github.com/graphql/graphiql) donde puedes explorar la API GraphQL desplegada para el subgrafo emitiendo consultas y viendo el esquema.

-An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities.
+A continuación se proporciona un ejemplo, pero por favor, consulta la [Query API](/developer/graphql-api) para obtener una referencia completa sobre cómo consultar las entidades del subgrafo.

-#### Example
+#### Ejemplo

-This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`:
+Esta consulta lista todos los contadores que nuestro mapeo ha creado. Como sólo creamos uno, el resultado sólo contendrá nuestro único `default-counter`:

 ```graphql
 {
@@ -19,10 +19,10 @@ This query lists all the counters our mapping has created. Since we only create
 }
 ```

-## Using The Hosted Service
+## Utilización del Servicio Alojado

-The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service.
+The Graph Explorer y su playground GraphQL es una forma útil de explorar y consultar los subgrafos desplegados en el Servicio Alojado. 
-Some of the main features are detailed below: +A continuación se detallan algunas de las principales características: -![Explorer Playground](/img/explorer-playground.png) +![Explora el Playground](/img/explorer-playground.png) From ec9bcad7095fd11b634f265f509ec62c4fb1d2ff Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:03 -0500 Subject: [PATCH 191/241] New translations query-hosted-service.mdx (Arabic) --- pages/ar/hosted-service/query-hosted-service.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/hosted-service/query-hosted-service.mdx b/pages/ar/hosted-service/query-hosted-service.mdx index 731e3a3120b2..fd7de3b535a2 100644 --- a/pages/ar/hosted-service/query-hosted-service.mdx +++ b/pages/ar/hosted-service/query-hosted-service.mdx @@ -4,11 +4,11 @@ title: Query the Hosted Service With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +تم توفير المثال أدناه ، ولكن يرجى الاطلاع على [Query API](/developer/graphql-api) للحصول على مرجع كامل حول كيفية الاستعلام عن كيانات الـ subgraph. -#### Example +#### مثال -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +يسرد هذا الاستعلام جميع العدادات التي أنشأها الـ mapping الخاص بنا. نظرا لأننا أنشأنا واحدا فقط ، فستحتوي النتيجة فقط على `default-counter`: ```graphql { From 95465b09edc16e098cc66a09b3ee3f141c993a2b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:04 -0500 Subject: [PATCH 192/241] New translations query-hosted-service.mdx (Japanese) --- pages/ja/hosted-service/query-hosted-service.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/hosted-service/query-hosted-service.mdx b/pages/ja/hosted-service/query-hosted-service.mdx index 731e3a3120b2..0fe2dbf03bb0 100644 --- a/pages/ja/hosted-service/query-hosted-service.mdx +++ b/pages/ja/hosted-service/query-hosted-service.mdx @@ -4,11 +4,11 @@ title: Query the Hosted Service With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +以下に例を示しますが、サブグラフのエンティティへのクエリの方法については、[Query API](/developer/graphql-api)を参照してください。 #### Example -This query lists all the counters our mapping has created. 
Since we only create one, the result will only contain our one `default-counter`: +このクエリは、マッピングが作成したすべてのカウンターを一覧表示します。 作成するのは 1 つだけなので、結果には 1 つの`デフォルトカウンター ```graphql { From b852350a66aff252277358c95d69a760ab95c2fa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:05 -0500 Subject: [PATCH 193/241] New translations query-hosted-service.mdx (Chinese Simplified) --- .../zh/hosted-service/query-hosted-service.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/pages/zh/hosted-service/query-hosted-service.mdx b/pages/zh/hosted-service/query-hosted-service.mdx index 731e3a3120b2..ad41c4bede90 100644 --- a/pages/zh/hosted-service/query-hosted-service.mdx +++ b/pages/zh/hosted-service/query-hosted-service.mdx @@ -1,14 +1,14 @@ --- -title: Query the Hosted Service +title: 查询托管服务 --- -With the subgraph deployed, visit the [Hosted Service](https://thegraph.com/hosted-service/) to open up a [GraphiQL](https://github.com/graphql/graphiql) interface where you can explore the deployed GraphQL API for the subgraph by issuing queries and viewing the schema. +部署子图后,请访问[托管服务](https://thegraph.com/hosted-service/) 以打开 [GraphiQL](https://github.com/graphql/graphiql) 界面,您可以在其中通过发出查询和查看数据模式来探索已经部署的子图的 GraphQL API。 -An example is provided below, but please see the [Query API](/developer/graphql-api) for a complete reference on how to query the subgraph's entities. +下面提供了一个示例,但请参阅 [查询 API ](/developer/graphql-api) 以获取有关如何查询子图实体的完整参考。 -#### Example +#### 示例 -This query lists all the counters our mapping has created. Since we only create one, the result will only contain our one `default-counter`: +此查询列出了我们的映射创建的所有计数器。 由于我们只创建一个,结果将只包含我们的一个 `默认计数器`: ```graphql { @@ -19,10 +19,10 @@ This query lists all the counters our mapping has created. Since we only create } ``` -## Using The Hosted Service +## 使用托管服务 -The Graph Explorer and its GraphQL playground is a useful way to explore and query deployed subgraphs on the Hosted Service. +Graph Explorer 及其 GraphQL playground是探索和查询托管服务上部署的子图的有用方式。 -Some of the main features are detailed below: +下面详细介绍了一些主要功能: -![Explorer Playground](/img/explorer-playground.png) +![探索Playground](/img/explorer-playground.png) From 8e6f47a6843c038b5394e45144bd548c357a40c3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:07 -0500 Subject: [PATCH 194/241] New translations what-is-hosted-service.mdx (Spanish) --- .../hosted-service/what-is-hosted-service.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/pages/es/hosted-service/what-is-hosted-service.mdx b/pages/es/hosted-service/what-is-hosted-service.mdx index 7f604c8dc31a..03b41d6578b5 100644 --- a/pages/es/hosted-service/what-is-hosted-service.mdx +++ b/pages/es/hosted-service/what-is-hosted-service.mdx @@ -1,20 +1,20 @@ --- -title: What is the Hosted Service? +title: '¿Qué es el Servicio Alojado?' --- -This section will walk you through deploying a subgraph to the Hosted Service, otherwise known as the [Hosted Service.](https://thegraph.com/hosted-service/) As a reminder, the Hosted Service will not be shut down soon. We will gradually sunset the Hosted Service once we reach feature parity with the decentralized network. 
Your subgraphs deployed on the Hosted Service are still available [here.](https://thegraph.com/hosted-service/) +Esta sección te guiará a través del despliegue de un subgrafo en el Servicio Alojado, también conocido como [Servicio Alojado.](https://thegraph.com/hosted-service/) Como recordatorio, el Servicio Alojado no se cerrará pronto. El Servicio Alojado desaparecerá gradualmente cuando alcancemos la paridad de características con la red descentralizada. Tus subgrafos desplegados en el Servicio Alojado siguen disponibles [aquí.](https://thegraph.com/hosted-service/) -If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. +Si no tienes una cuenta en el Servicio Alojado, puedes registrarte con tu cuenta de Github. Una vez que te autentiques, puedes empezar a crear subgrafos a través de la interfaz de usuario y desplegarlos desde tu terminal. Graph Node admite varias redes de prueba de Ethereum (Rinkeby, Ropsten, Kovan) además de la red principal. -## Create a Subgraph +## Crear un Subgrafo -First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` +Primero sigue las instrucciones [aquí](/developer/define-subgraph-hosted) para instalar the Graph CLI. Crea un subgrafo pasando `graph init --product hosted service` -### From an Existing Contract +### De un Contrato Existente -If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from this contract can be a good way to get started on the Hosted Service. +Si ya tienes un contrato inteligente desplegado en la red principal de Ethereum o en una de las redes de prueba, el arranque de un nuevo subgrafo a partir de este contrato puede ser una buena manera de empezar a utilizar el Servicio Alojado. -You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from [Etherscan](https://etherscan.io/). +Puedes utilizar este comando para crear un subgrafo que indexe todos los eventos de un contrato existente. Esto intentará obtener el contrato ABI de [Etherscan](https://etherscan.io/). ```sh graph init \ @@ -23,28 +23,28 @@ graph init \ / [] ``` -Additionally, you can use the following optional arguments. If the ABI cannot be fetched from Etherscan, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form. +Además, puedes utilizar los siguientes argumentos opcionales. Si la ABI no puede ser obtenida de Etherscan, vuelve a solicitar una ruta de archivo local. Si falta algún argumento opcional en el comando, éste te lleva a través de un formulario interactivo. ```sh --network \ --abi \ ``` -The `` in this case is your github user or organization name, `` is the name for your subgraph, and `` is the optional name of the directory where graph init will put the example subgraph manifest. The `` is the address of your existing contract. `` is the name of the Ethereum network that the contract lives on. `` is a local path to a contract ABI file. 
**Both --network and --abi are optional.** +El ``en este caso es tu nombre de usuario u organización de github, `` es el nombre para tu subgrafo, y `` es el nombre opcional del directorio donde graph init pondrá el manifiesto del subgrafo de ejemplo. El `` es la dirección de tu contrato existente. `` es el nombre de la red Ethereum en la que está activo el contrato. `` es una ruta local a un archivo ABI del contrato. **Tanto --network como --abi son opcionales** -### From an Example Subgraph +### De un Subgrafo de Ejemplo -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +El segundo modo que admite `graph init` es la creación de un nuevo proyecto a partir de un subgrafo de ejemplo. El siguiente comando lo hace: ``` graph init --from-example --product hosted-service / [] ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. +El subgrafo de ejemplo se basa en el contrato Gravity de Dani Grant que gestiona los avatares de los usuarios y emite `NewGravatar` o `UpdateGravatar` cada vez que se crean o actualizan los avatares. El subgrafo maneja estos eventos escribiendo entidades `Gravatar` en el almacén de the Graph Node y asegurándose de que éstas se actualicen según los eventos. Continúa con el [manifiesto del subgrafo](/developer/create-subgraph-hosted#the-subgraph-manifest) para entender mejor a qué eventos de tus contratos inteligentes hay que prestar atención, los mapeos y mucho más. -## Supported Networks on the Hosted Service +## Redes Admitidas en el Servicio Alojado -Please note that the following networks are supported on the Hosted Service. Networks outside of Ethereum mainnet ('mainnet') are not currently supported on [The Graph Explorer.](https://thegraph.com/explorer) +Ten en cuenta que las siguientes redes son admitidas en el Servicio Alojado. Las redes fuera de la red principal de Ethereum ('mainnet') no son actualmente admitidas en [The Graph Explorer.](https://thegraph.com/explorer) - `mainnet` - `kovan` From f2995f97824fdbc83dc22652df365f793308d72e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:08 -0500 Subject: [PATCH 195/241] New translations what-is-hosted-service.mdx (Arabic) --- pages/ar/hosted-service/what-is-hosted-service.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/hosted-service/what-is-hosted-service.mdx b/pages/ar/hosted-service/what-is-hosted-service.mdx index 7f604c8dc31a..fbd2e4687b57 100644 --- a/pages/ar/hosted-service/what-is-hosted-service.mdx +++ b/pages/ar/hosted-service/what-is-hosted-service.mdx @@ -6,7 +6,7 @@ This section will walk you through deploying a subgraph to the Hosted Service, o If you don't have an account on the Hosted Service, you can signup with your Github account. Once you authenticate, you can start creating subgraphs through the UI and deploying them from your terminal. Graph Node supports a number of Ethereum testnets (Rinkeby, Ropsten, Kovan) in addition to mainnet. 
-## Create a Subgraph +## إنشاء الـ Subgraph First follow the instructions [here](/developer/define-subgraph-hosted) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted service` @@ -34,13 +34,13 @@ The `` in this case is your github user or organization name, `/ [] ``` -The example subgraph is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. +يعتمد مثال الـ subgraph على عقد Gravity بواسطة Dani Grant الذي يدير avatars للمستخدم ويصدر أحداث `NewGravatar` أو `UpdateGravatar` كلما تم إنشاء avatars أو تحديثها. يعالج الـ subgraph هذه الأحداث عن طريق كتابة كيانات `Gravatar` إلى مخزن Graph Node والتأكد من تحديثها وفقا للأحداث. Continue on to the [subgraph manifest](/developer/create-subgraph-hosted#the-subgraph-manifest) to better understand which events from your smart contracts to pay attention to, mappings, and more. ## Supported Networks on the Hosted Service From dc0eed9a86bbfd30071654703b2914a9995d1543 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:10 -0500 Subject: [PATCH 196/241] New translations deploy-subgraph-studio.mdx (Chinese Simplified) --- pages/zh/studio/deploy-subgraph-studio.mdx | 48 +++++++++++----------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/pages/zh/studio/deploy-subgraph-studio.mdx b/pages/zh/studio/deploy-subgraph-studio.mdx index 2155d8fe8976..62f614ab7d15 100644 --- a/pages/zh/studio/deploy-subgraph-studio.mdx +++ b/pages/zh/studio/deploy-subgraph-studio.mdx @@ -1,68 +1,68 @@ --- -title: Deploy a Subgraph to the Subgraph Studio +title: 将一个子图部署到子图工作室 --- -Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you through the steps to: +将一个子图部署到子图工作室是非常简单的。 你可以通过以下步骤完成: -- Install The Graph CLI (with both yarn and npm) -- Create your Subgraph in the Subgraph Studio -- Authenticate your account from the CLI -- Deploying a Subgraph to the Subgraph Studio +- 安装Graph CLI(同时使用yarn和npm)。 +- 在子图工作室中创建你的子图 +- 从CLI认证你的账户 +- 将一个子图部署到子图工作室 -## Installing Graph CLI +## 安装Graph CLI -We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. +我们使用相同的CLI将子图部署到我们的 [托管服务](https://thegraph.com/hosted-service/) 和[Subgraph Studio](https://thegraph.com/studio/)中。 以下是安装graph-cli的命令。 这可以用npm或yarn来完成。 -**Install with yarn:** +**用yarn安装:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**用npm安装:** ```bash npm install -g @graphprotocol/graph-cli ``` -## Create your Subgraph in Subgraph Studio +## 在子图工作室中创建你的子图 -Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. 
+在部署你的实际子图之前,你需要在 [子图工作室](https://thegraph.com/studio/)中创建一个子图。 我们建议你阅读我们的[Studio文档](/studio/subgraph-studio)以了解更多这方面的信息。 -## Initialize your Subgraph +## 初始化你的子图 -Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: +一旦你的子图在子图工作室中被创建,你可以用这个命令初始化子图代码。 ```bash graph init --studio ``` -The `` value can be found on your subgraph details page in Subgraph Studio: +``值可以在Subgraph Studio中你的子图详情页上找到。 ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network and abi that you want to query. Doing this will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +运行`graph init`后,你会被要求输入你想查询的合同地址、网络和abi。 这样做将在你的本地机器上生成一个新的文件夹,里面有一些基本代码,可以开始在你的子图上工作。 然后,你可以最终确定你的子图,以确保它按预期工作。 -## Graph Auth +## Graph 认证 -Before being able to deploy your subgraph to Subgraph Studio, you need to login to your account within the CLI. To do this, you will need your deploy key that you can find on your "My Subgraphs" page or on your subgraph details page. +在能够将你的子图部署到子图工作室之前,你需要在CLI中登录到你的账户。 要做到这一点,你将需要你的部署密钥,你可以在你的 "我的子图 "页面或子图的详细信息页面上找到。 -Here is the command that you need to use to authenticate from the CLI: +以下是你需要使用的命令,以从CLI进行认证: ```bash graph auth --studio ``` -## Deploying a Subgraph to Subgraph Studio +## 将一个子图部署到子图工作室 -Once you are ready, you can deploy your subgraph to Subgraph Studio. Doing this won't publish your subgraph to the decentralized network, it will only deploy it to your Studio account where you will be able to test it and update the metadata. +一旦你准备好了,你可以将你的子图部署到子图工作室。 这样做不会将你的子图发布到去中心化的网络中,它只会将它部署到你的Studio账户中,在那里你将能够测试它并更新元数据。 -Here is the CLI command that you need to use to deploy your subgraph. +这里是你需要使用的CLI命令,以部署你的子图。 ```bash graph deploy --studio ``` -After running this command, the CLI will ask for a version label, you can name it however you want, you can use labels such as `0.1` and `0.2` or use letters as well such as `uniswap-v2-0.1` . Those labels will be visible in Graph Explorer and can be used by curators to decide if they want to signal on this version or not, so choose them wisely. +运行这个命令后,CLI会要求提供一个版本标签,你可以随意命名,你可以使用 `0.1`和 `0.2`这样的标签,或者也可以使用字母,如 `uniswap-v2-0.1` . 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 -Once deployed, you can test your subgraph in Subgraph Studio using the playground, deploy another version if needed, update the metadata, and when you are ready, publish your subgraph to Graph Explorer. +一旦部署完毕,你可以在子图工作室中使用控制面板测试你的子图,如果需要的话,可以部署另一个版本,更新元数据,当你准备好后,将你的子图发布到Graph Explorer。 From 7eea84fc56d7313ce01bbef5d10be4429aac1d10 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:11 -0500 Subject: [PATCH 197/241] New translations billing.mdx (Spanish) --- pages/es/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/studio/billing.mdx b/pages/es/studio/billing.mdx index 588cd2ed2f40..9a9d4593cced 100644 --- a/pages/es/studio/billing.mdx +++ b/pages/es/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### Overview +### Descripción Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. 
The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 1e29b044036cd1a8af2a73f52b83380503b406be Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:12 -0500 Subject: [PATCH 198/241] New translations billing.mdx (Arabic) --- pages/ar/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ar/studio/billing.mdx b/pages/ar/studio/billing.mdx index 588cd2ed2f40..67a5a8c1420e 100644 --- a/pages/ar/studio/billing.mdx +++ b/pages/ar/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### Overview +### نظره عامة Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 0abe198b1a371fd9f502b9472635d6a6dc7e5970 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:13 -0500 Subject: [PATCH 199/241] New translations billing.mdx (Japanese) --- pages/ja/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/studio/billing.mdx b/pages/ja/studio/billing.mdx index 588cd2ed2f40..7f23343baa17 100644 --- a/pages/ja/studio/billing.mdx +++ b/pages/ja/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### Overview +### 概要 Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 2afe54178734e39ad7213108bfd00b93f6755cdd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:14 -0500 Subject: [PATCH 200/241] New translations billing.mdx (Korean) --- pages/ko/studio/billing.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ko/studio/billing.mdx b/pages/ko/studio/billing.mdx index 588cd2ed2f40..4788124913d9 100644 --- a/pages/ko/studio/billing.mdx +++ b/pages/ko/studio/billing.mdx @@ -2,7 +2,7 @@ title: Billing on the Subgraph Studio --- -### Overview +### 개요 Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. 
It’ll allow you to: @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 6f0ad89bd4fb6b4e4159ab43bee6545cca99f253 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:15 -0500 Subject: [PATCH 201/241] New translations billing.mdx (Chinese Simplified) --- pages/zh/studio/billing.mdx | 64 ++++++++++++++++++------------------- 1 file changed, 32 insertions(+), 32 deletions(-) diff --git a/pages/zh/studio/billing.mdx b/pages/zh/studio/billing.mdx index 588cd2ed2f40..ce99acd65775 100644 --- a/pages/zh/studio/billing.mdx +++ b/pages/zh/studio/billing.mdx @@ -1,43 +1,43 @@ --- -title: Billing on the Subgraph Studio +title: 子图工作室的计费 --- -### Overview +### 概述 -Invoices are statements of payment amounts owed by a customer and are typically generated on a weekly basis in the system. You’ll be required to pay fees based on the query fees you generate using your API keys. The billing contract lives on the [Polygon](https://polygon.technology/) network. It’ll allow you to: +发票是客户所欠付款金额的报表,通常在系统中每周生成一次。 你需要根据你使用API密钥产生的查询费用来支付费用。 账单合同在[Polygon](https://polygon.technology/)网络上。 它将允许你: -- Add and remove GRT -- Keep track of your balances based on how much GRT you have added to your account, how much you have removed, and your invoices -- Automatically clear payments based on query fees generated +- 添加和移除GRT +- 根据你向你的账户添加了多少GRT,你移除了多少,以及你的发票来跟踪你的余额。 +- 根据产生的查询费用自动结算付款 -In order to add GRT to your account, you will need to go through the following steps: +为了将GRT添加到你的账户中,你将需要通过以下步骤: -1. Purchase GRT and ETH on an exchange of your choice -2. Send the GRT and ETH to your wallet -3. Bridge GRT to Polygon using the UI +1. 在您选择的交易所购买GRT和ETH +2. 将GRT和ETH发送到你的钱包里 +3. 使用用户界面桥接GRT到Polygon - a) You will receive 0.001 Matic in a few minutes after you send any amount of GRT to the Polygon bridge. You can track the transaction on [Polygonscan](https://polygonscan.com/) by inputting your address into the search bar. + a) 在你向Polygon桥发送任何数量的GRT后,你将在几分钟内收到0.001 Matic。 你可以在搜索栏中输入你的地址,在 [Polygonscan](https://polygonscan.com/)上跟踪交易情况。 -4. Add bridged GRT to the billing contract on Polygon. The billing contract address is: [0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). +4. 在Polygon的计费合同中加入桥接的GRT。 计费合同地址是:[0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE](https://polygonscan.com/address/0x10829DB618E6F520Fa3A01c75bC6dDf8722fA9fE). - a) In order to complete step #4, you'll need to switch your network in your wallet to Polygon. You can add Polygon's network by connecting your wallet and clicking on "Choose Matic (Polygon) Mainnet" [here.](https://chainlist.org/) Once you've added the network, switch it over in your wallet by navigating to the network pill on the top right hand side corner. In Metamask, the network is called **Matic Mainnnet.** + a) 为了完成第4步,你需要将钱包中的网络切换到Polygon。 你可以通过连接你的钱包并点击[这里](https://chainlist.org/) 的 "选择Matic(Polygon)主网 "来添加Polygon的网络。一旦你添加了网络,在你的钱包里通过导航到右上角的网络图标来切换它。 在Metamask中,该网络被称为 **Matic Mainnnet.** -At the end of each week, if you used your API keys, you will receive an invoice based on the query fees you have generated during this period. This invoice will be paid using GRT available in your balance. Query volume is evaluated by the API keys you own. Your balance will be updated after fees are withdrawn. 
+在每个周末,如果你使用了你的API密钥,你将会收到一张基于你在这期间产生的查询费用的发票。 这张发票将用你余额中的GRT来支付。 查询量是由你拥有的API密钥来评估的。 你的余额将在费用提取后被更新。 -#### Here’s how you go through the invoicing process: +#### 下面是你如何进行开票的过程: -There are 4 states your invoice can be in: +你的发票可以有4种状态: -1. Created - your invoice has just been created and not been paid yet -2. Paid - your invoice has been successfully paid -3. Unpaid - there is not enough GRT in your balance on the billing contract -4. Error - there is an error processing the payment +1. 创建--你的发票刚刚创建,还没有被支付 +2. 已付 - 你的发票已成功支付 +3. 未支付 - 账单合同上你的余额中没有足够的GRT +4. 错误 - 处理付款时出现了错误 -**See the diagram below for more information:** +**更多信息见下图:** ![Billing Flow](/img/billing-flow.png) -For a quick demo of how billing works on the Subgraph Studio, check out the video below: +关于在Subgraph Studio上如何进行计费的快速演示,请看下面的视频。
-### Multisig Users +### 多重签名用户 -Multisigs are smart-contracts that can exist only on the network they have been created, so if you created one on Ethereum Mainnet - it will only exist on Mainnet. Since our billing uses Polygon, if you were to bridge GRT to the multisig address on Polygon the funds would be lost. +多重合约是只能存在于它们所创建的网络上的智能合约,所以如果你在以太坊主网上创建了一个--它将只存在于主网上。 由于我们的账单使用Polygon,如果你将GRT桥接到Polygon的多符号地址上,资金就会丢失。 -To overcome this issue, we created [a dedicated tool](https://multisig-billing.thegraph.com/) that will help you deposit GRT on our billing contract (on behalf of the multisig) with a standard wallet / EOA (an account controlled by a private key). +为了克服这个问题,我们创建了 [一个专门的工具](https://multisig-billing.thegraph.com/),它将帮助你用一个标准的钱包/EOA(一个由私钥控制的账户)在我们的计费合同上存入GRT(代表multisig)。 -You can access our Multisig Billing Tool here: https://multisig-billing.thegraph.com/ +你可以在这里访问我们的Multisig计费工具:https://multisig-billing.thegraph.com/ -This tool will guide you to go through the following steps: +这个工具将指导你完成以下步骤: -1. Connect your standard wallet / EOA (this wallet needs to own some ETH as well as the GRT you want to deposit) -2. Bridge GRT to Polygon. You will have to wait 7-8 minutes after the transaction is complete for the bridge transfer to be finalized. -3. Once your GRT is available on your Polygon balance you can deposit them to the billing contract while specifying the multisig address you are funding in the `Multisig Address` field. +1. 连接你的标准钱包/EOA(这个钱包需要拥有一些ETH以及你要存入的GRT)。 +2. 桥GRT到Polygon。 在交易完成后,你需要等待7-8分钟,以便最终完成桥梁转移。 +3. 一旦你的GRT在你的Polygon余额中可用,你就可以把它们存入账单合同,同时在`Multisig地址栏` 中指定你要资助的multisig地址。 -Once the deposit transaction has been confirmed you can go back to [Subgraph Studio](https://thegraph.com/studio/) and connect with your Gnosis Safe Multisig to create API keys and use them to generate queries. +一旦存款交易得到确认,你就可以回到 [Subgraph Studio](https://thegraph.com/studio/),并与你的Gnosis Safe Multisig连接,以创建API密钥并使用它们来生成查询。 -Those queries will generate invoices that will be paid automatically using the multisig’s billing balance. +这些查询将产生发票,这些发票将使用multisig的账单余额自动支付。 From 90b7bfa3daf04ac92930eeb98b6e9234e0d88aa4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:16 -0500 Subject: [PATCH 202/241] New translations billing.mdx (Vietnamese) --- pages/vi/studio/billing.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/vi/studio/billing.mdx b/pages/vi/studio/billing.mdx index 588cd2ed2f40..fae13d468b27 100644 --- a/pages/vi/studio/billing.mdx +++ b/pages/vi/studio/billing.mdx @@ -43,7 +43,7 @@ For a quick demo of how billing works on the Subgraph Studio, check out the vide From 11e454fbac75272c7eafb98de79fc13f12a4f568 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:17 -0500 Subject: [PATCH 203/241] New translations deploy-subgraph-studio.mdx (Spanish) --- pages/es/studio/deploy-subgraph-studio.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/es/studio/deploy-subgraph-studio.mdx b/pages/es/studio/deploy-subgraph-studio.mdx index 2155d8fe8976..72ca3decc35b 100644 --- a/pages/es/studio/deploy-subgraph-studio.mdx +++ b/pages/es/studio/deploy-subgraph-studio.mdx @@ -1,5 +1,5 @@ --- -title: Deploy a Subgraph to the Subgraph Studio +title: Despliegue de un subgrafo en Subgraph Studio --- Deploying a Subgraph to the Subgraph Studio is quite simple. 
This will take you through the steps to: @@ -13,13 +13,13 @@ Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. -**Install with yarn:** +**Instalar con yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**Instalar con npm:** ```bash npm install -g @graphprotocol/graph-cli @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. -## Initialize your Subgraph +## Inicializa tu Subgrafo Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From f73da74a258ea055530f2f6adecda771ae8927d9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:18 -0500 Subject: [PATCH 204/241] New translations deploy-subgraph-studio.mdx (Arabic) --- pages/ar/studio/deploy-subgraph-studio.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ar/studio/deploy-subgraph-studio.mdx b/pages/ar/studio/deploy-subgraph-studio.mdx index 2155d8fe8976..b9d406812541 100644 --- a/pages/ar/studio/deploy-subgraph-studio.mdx +++ b/pages/ar/studio/deploy-subgraph-studio.mdx @@ -13,13 +13,13 @@ Deploying a Subgraph to the Subgraph Studio is quite simple. This will take you We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn. -**Install with yarn:** +**التثبيت بواسطة yarn:** ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +**التثبيت بواسطة npm:** ```bash npm install -g @graphprotocol/graph-cli @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. -## Initialize your Subgraph +## قم بتهيئة Subgraph الخاص بك Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From 1d0aaebf10c0dbc57dbd546b9da57fcb91a67260 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:19 -0500 Subject: [PATCH 205/241] New translations deploy-subgraph-studio.mdx (Japanese) --- pages/ja/studio/deploy-subgraph-studio.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ja/studio/deploy-subgraph-studio.mdx b/pages/ja/studio/deploy-subgraph-studio.mdx index 2155d8fe8976..69b6786ebda4 100644 --- a/pages/ja/studio/deploy-subgraph-studio.mdx +++ b/pages/ja/studio/deploy-subgraph-studio.mdx @@ -29,7 +29,7 @@ npm install -g @graphprotocol/graph-cli Before deploying your actual subgraph you need to create a subgraph in [Subgraph Studio](https://thegraph.com/studio/). We recommend you read our [Studio documentation](/studio/subgraph-studio) to learn more about this. 
-## Initialize your Subgraph +## サブグラフの初期化 Once your subgraph has been created in Subgraph Studio you can initialize the subgraph code using this command: From 294c08cae34741026a0c304224fe74bd07ec307e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:20 -0500 Subject: [PATCH 206/241] New translations near.mdx (Vietnamese) --- pages/vi/supported-networks/near.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/vi/supported-networks/near.mdx b/pages/vi/supported-networks/near.mdx index 288ac380494c..639b1de21297 100644 --- a/pages/vi/supported-networks/near.mdx +++ b/pages/vi/supported-networks/near.mdx @@ -226,7 +226,7 @@ Here are some example subgraphs for reference: [NEAR Receipts](https://github.com/graphprotocol/example-subgraph/tree/near-receipts-example) -## FAQ +## CÂU HỎI THƯỜNG GẶP ### How does the beta work? From 37cbb8690070faec3af0bba3e88bd3179a8b8306 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:21 -0500 Subject: [PATCH 207/241] New translations curating.mdx (Spanish) --- pages/es/curating.mdx | 104 +++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 52 deletions(-) diff --git a/pages/es/curating.mdx b/pages/es/curating.mdx index 85cfcf091c87..425cb5608b6f 100644 --- a/pages/es/curating.mdx +++ b/pages/es/curating.mdx @@ -2,102 +2,102 @@ title: curación --- -Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. +Los curadores son vitales para la economía descentralizada que conforma a The Graph. Ellos utilizan su conocimiento sobre el ecosistema Web3 para calificar y señalar los subgrafos que deben ser indexados en la red de The Graph. A través del explorador, los curadores pueden ver los datos de la red y tomar decisiones sobre la señalización. The Graph Network recompensa a los curadores que señalan subgrafos valiosos para la red ya que ganan una parte de las tarifas de consulta que generan los subgrafos. Los curadores están motivados económicamente a través de la señalización rápida de dichos subgrafos. Estas señales de los curadores son importantes para los Indexadores, quienes luego pueden procesar o indexar los datos de estos subgrafos señalados. -When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. +Al señalar, los curadores pueden decidir entre señalar en una versión específica del subgrafo o hacerlo usando la opción de auto migración. Cuando se señala mediante la auto migración, las acciones de un curador siempre se actualizarán a la última versión publicada por el desarrollador. Si, en cambio, decides señalar una versión específica, las acciones siempre permanecerán en esa versión específica. 

-Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)
+Recuerda que la curación es riesgosa. Por favor, haz una investigación rigurosa para asegurarte de seleccionar los subgrafos en los que confiar. Crear un subgrafo no requiere permiso, por lo que las personas pueden crear subgrafos y llamarlos con el nombre que deseen. Para obtener más orientación sobre los riesgos de la curación, consulta la [Guía de Curación de The Graph Academy.](https://thegraph.academy/curators/)

-## Bonding Curve 101
+## Curva de vinculación 101

-First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted.
+Primero, demos un paso atrás. Cada subgrafo tiene una curva de vinculación en la que se acuñan las acciones de curación, cuando un usuario agrega una señal **a** la curva. La curva de vinculación de cada subgrafo es única. Las curvas de vinculación están diseñadas para que el precio tras acuñar (mintear) una participación dentro de la curación de un subgrafo aumente linealmente, sobre el número de participaciones acuñadas.

-![Price per shares](/img/price-per-share.png)
+![Precio por acciones](/img/price-per-share.png)

-As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below:
+Como resultado, el precio aumenta linealmente, lo que significa que con el tiempo resultará más caro comprar una participación. A continuación, se muestra un ejemplo de lo que queremos decir; consulta la curva de vinculación a continuación:

-![Bonding curve](/img/bonding-curve.png)
+![Curva de vinculación](/img/bonding-curve.png)

-Consider we have two curators that mint shares for a subgraph:
+Imagina que tenemos dos curadores que acuñan participaciones dentro de un subgrafo:

-- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares.
-- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve.
-- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties.
-- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT.
-- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT.
-- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph.
+- El Curador A es el primero en señalar dentro del subgrafo. Al agregar 120.000 GRT en la curva, puede acuñar 2000 participaciones.
+- La señal del Curador B está en el subgrafo en algún momento posterior al primero. 
Para recibir la misma cantidad participativa que el Curador A, este deberá agregar 360.000 GRT en la curva. +- Dado que ambos curadores poseen la mitad participativa de dicha curación, recibirían una cantidad igual en las recompensas por ser curador. +- Si alguno de los curadores quemara sus 2000 participaciones, recibirían 360.000 GRT. +- El curador restante recibiría todas las recompensas en ese subgrafo. Si quemaran sus participaciones a fin de retirar sus GRT, recibirían 120.000 GRT. +- **TLDR:** El valor de las participaciones en GRT son determinadas por la curva de vinculación y suelen ser volátiles. Existe la posibilidad de incurrir en grandes pérdidas. La señalización temprana significa que ingresas menos GRT por cada acción. Profundizando un poco, esto significa que ganarás mas recompensas en GRT siendo el primer curador en ese subgrafo que los posteriores en llegar. -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** +En general, una curva de vinculación es una curva matemática que define la relación entre la oferta de tokens y el precio de los activos. Siendo específicos en la curación de subgrafos, **el precio de cada participación del subgrafo aumenta con cada token invertido** y el **precio de cada participación disminuye con cada token vendido.** -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. +En el caso de The Graph, se aprovecha [la implementación de una fórmula por parte de Bancor para la curva de vinculación](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). -## How to Signal +## ¿Cómo señalar? -Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) +Ahora que hemos abarcado los conceptos básicos sobre cómo funciona la curva de vinculación, vamos a enseñarte como señalar un subgrafo. Dentro de la pestaña Curador en el explorador de The Graph, los curadores podrán señalar y anular la señal en ciertos subgrafos basados en las estadísticas de la red. Para una descripción general paso a paso de cómo hacer esto en el explorador, [haz click aquí.](https://thegraph.com/docs/explorer) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +Un curador puede optar por señalar una versión especifica de un subgrafo, o puede optar por que su señal migre automáticamente a la versión de producción mas reciente de ese subgrafo. Ambas son estrategias válidas y tienen sus pros y sus contras. -Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Señalar una versión específica es esencialmente útil cuando un subgrafo es usado por múltiples dApps. Una dApp podría necesitar una actualización periódica a fin de que el subgrafo tenga nuevas funciones. Otra dApp podría necesitar una versión de subgrafo mas antigua y bien probada. Luego de la curación inicial, se incurre en una tarifa estándar del 1%. -Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. +Hacer que tu señal migre automáticamente a la versión más reciente, puede ser muy bueno si buscas asegurar la mayor cantidad de tarifas por consultas. Cada vez que curas, se incurre en un impuesto de curación del 1%. Además, pagaras un impuesto de curación del 0.5% en cada migración. Se aconseja a los desarrolladores de subgrafos a qué no publiquen nuevas versiones con frecuencia; puesto que deberán pagar una tarifa de curación del 0.5% en todas las acciones de curación migradas automáticamente. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. +> **Nota**: La primer dirección en señalar un subgrafo específico, se considera el primer curador, y éste tendrá que hacer un trabajo mucho más intenso en cuánto al gas, a diferencia del resto de los curadores que vengan después de él, esto debido a que el primer curador comienza los tokens participativos de la curación, inicia la curva de vinculación y también transfiere los tokens dentro del proxy de The Graph. -## What does Signaling mean for The Graph Network? +## ¿Qué significa Señalar para The Graph Network? -For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. +Para que los consumidores finales puedan consultar un subgrafo, primero se debe indexar el subgrafo. La indexación es un proceso en el que los archivos, los datos y los metadatos se examinan, catalogan y luego se indexan para que los resultados se puedan encontrar más rápido. Para que se puedan buscar los datos de un subgrafo, es necesario que esté organizado. -And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. +Por lo tanto, si los Indexadores tuvieran que adivinar qué subgrafos deberían indexar, habría pocas posibilidades de que obtengan tarifas de consulta sólidas porque no tendrían forma de validar qué subgrafos son de buena calidad. Ingrese a la curación. -Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. 
Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! +Los curadores hacen que la red The Graph sea eficiente y la señalización es el proceso que utilizan los curadores para que los Indexadores sepan que un subgrafo es bueno para indexar, donde los GRT son agregados a la curva de vinculación de un subgrafo. Los Indexadores pueden confiar intrínsecamente en la señal de un curador porque, al señalar, los curadores acuñan una acción de curación para el subgrafo, lo que les da derecho a una parte de las tarifas de consulta futuras que impulsa el subgrafo. La señal del curador se representa como un token ERC20 llamado Graph Curation Shares (GCS). Los curadores que quieran ganar más tarifas por consulta deberán anclar sus GRT a los subgrafos que predicen que generarán un fuerte flujo de tarifas dentro de la red. Los curadores también pueden ganar menos tarifas por consulta si eligen curar o señalar un subgrafo de baja calidad, ya que habrá menos consultas que procesar o menos Indexadores para procesar esas consultas. ¡Mira el siguiente diagrama! -![Signaling diagram](/img/curator-signaling.png) +![Diagrama de Señalización](/img/curator-signaling.png) -Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). +Los Indexadores pueden encontrar subgrafos para indexar en función de las señales de curación que ven en The Graph Explorer (captura de pantalla a continuación). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Subgrafos del Explorador](/img/explorer-subgraphs.png) -## Risks +## Riesgos -1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. 
- - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. El mercado de consultas es inherentemente joven en The Graph y existe el riesgo de que su APY (Rentabilidad anualizada) sea más bajo de lo esperado debido a la dinámica del mercado que recién está empezando. +2. Cuando un curador ancla sus GRT en un subgrafo, deberá pagar un impuesto de curación equivalente al 1%. Esta tarifa se quema y el resto se deposita en el suministro de reserva de la curva de vinculación. +3. Cuando los curadores queman sus acciones para retirar los GRT, se reducirá la participación de GRT de las acciones restantes. Ten en cuenta que, en algunos casos, los curadores pueden decidir quemar sus acciones, **todas al mismo tiempo**. Esta situación puede ser común si un desarrollador de dApp deja de actualizar la aplicación, no sigue consultando su subgrafo o si falla el mismo. Como resultado, es posible que los curadores solo puedan retirar una fracción de sus GRT iniciales. Si buscas un rol dentro red que conlleve menos riesgos, consulta \[Delegators\] (https://thegraph.com/docs/delegating). +4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. + - Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá en una tarifa de curación del 0.5%. + - Si has señalado en una versión de subgrafo específica y falla, tendrás que quemar manualmente tus acciones de curación. Ten en cuenta que puedes recibir más o menos GRT de los que depositaste inicialmente en la curva de curación, y esto es un riesgo que todo curador acepta al empezar. Luego podrás firmar la nueva versión del subgrafo, incurriendo así en un impuesto de curación equivalente al 1%. -## Curation FAQs +## Preguntas frecuentes sobre Curación -### 1. What % of query fees do Curators earn? +### 1. ¿Qué porcentaje obtienen los curadores de las comisiones por consulta? -By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. +Al señalar un subgrafo, ganarás parte de todas las tarifas de consulta que genera dicho subgrafo. El 10% de todas las tarifas de consulta va destinado a los Curadores y se distribuye proporcionalmente en base a la participación de cada uno. Este 10% está sujeto a gobernanza. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. ¿Cómo decido qué subgrafos son de alta calidad para señalar? -Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. 
As a result: +Encontrar subgrafos de alta calidad es una tarea compleja, pero se puede abordar de muchas formas diferentes. Como Curador, quieres buscar subgrafos confiables que impulsen el volumen de consultas. Un subgrafo confiable puede ser valioso si es completo, preciso y respalda las necesidades de dicha dApp. Es posible que un subgrafo con una arquitectura deficiente deba revisarse o volver a publicarse, y también puede terminar fallando. Es fundamental que los Curadores revisen la arquitectura o el código de un subgrafo para evaluar si un subgrafo es valioso. Como resultado: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Los curadores pueden usar su conocimiento de una red para intentar predecir cómo un subgrafo puede generar un volumen de consultas mayor o menor a largo plazo +- Los Curadores también deben comprender las métricas que están disponibles a través de Graph Explorer. Las métricas como el volumen de consultas anteriores y quién es el desarrollador del subgrafo pueden ayudar a determinar si vale la pena señalar un subgrafo o no. -### 3. What’s the cost of upgrading a subgraph? +### 3. ¿Cuál es el costo de actualizar un subgrafo? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. +La migración de tus acciones de curación a una nueva versión de subgrafo incurre en un impuesto de curación del 1%. Los Curadores pueden optar por suscribirse a la versión más reciente de un subgrafo. Cuando las acciones de los curadores se migran automáticamente a una nueva versión, los curadores también pagarán la mitad del impuesto de curación, es decir el 0.5%, porque la mejora de los subgrafos es una acción on-chain que requiere cubrir los costos del gas. -### 4. How often can I upgrade my subgraph? +### 4. ¿Con qué frecuencia puedo actualizar mi subgrafo? -It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. +Se sugiere que no actualices tus subgrafos con demasiada frecuencia. Consulta la pregunta anterior para obtener más detalles. -### 5. Can I sell my curation shares? +### 5. ¿Puedo vender mis acciones de curación? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. +Las participaciones de un curador no se pueden "comprar" o "vender" como otros tokens ERC20 con los que seguramente estás familiarizado. Solo pueden anclar (crearse) o quemarse (destruirse) a lo largo de la curva de vinculación de un subgrafo en particular. 
La cantidad de GRT necesaria para generar una nueva señal y la cantidad de GRT que recibes cuando quemas tu señal existente, está determinada por esa curva de vinculación. Como curador, debes saber que cuando quemas tus acciones de curación para retirar GRT, puedes terminar con más o incluso con menos GRT de los que depositaste en un inicio. -Still confused? Check out our Curation video guide below: +¿Sigues confundido? Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación:
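Tying together the Bonding Curve 101 example above with the mint and burn mechanics just described, here is a minimal TypeScript sketch. The linear price-per-share curve and the slope constant are assumptions chosen only so that the output matches the Curator A and Curator B numbers earlier on this page; the pricing actually used by The Graph is Bancor's bonding-curve formula linked above.

```typescript
// Sketch of the "Bonding Curve 101" worked example, assuming a linear curve:
// price per share = SLOPE * sharesAlreadyMinted (NOT the on-chain Bancor formula).
const SLOPE = 0.06; // GRT per share per share minted; chosen to fit the example

// GRT required to mint shares, i.e. the area under the price line from..to.
function mintCost(fromShares: number, toShares: number): number {
  return (SLOPE / 2) * (toShares * toShares - fromShares * fromShares);
}

// GRT returned when shares are burned back off the top of the curve.
function burnProceeds(totalShares: number, sharesBurned: number): number {
  return mintCost(totalShares - sharesBurned, totalShares);
}

console.log(mintCost(0, 2000));        // Curator A mints 2000 shares: 120,000 GRT
console.log(mintCost(2000, 4000));     // Curator B mints the next 2000: 360,000 GRT
console.log(burnProceeds(4000, 2000)); // burning 2000 of 4000 shares returns 360,000 GRT
console.log(burnProceeds(2000, 2000)); // the last 2000 shares return 120,000 GRT
```

Whichever curve is used, the shape of the result is the same as in the example: later curators pay more GRT per share, and the GRT returned on a burn depends on how much signal is still on the curve at that moment.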
From 58d7791910b2b11ea3743434df55538a7c8d952b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:22 -0500 Subject: [PATCH 208/241] New translations global.json (Korean) --- pages/ko/global.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ko/global.json b/pages/ko/global.json index 39bf594287dc..8dc8b72d9f86 100644 --- a/pages/ko/global.json +++ b/pages/ko/global.json @@ -1,8 +1,8 @@ { "language": "Language", - "aboutTheGraph": "About The Graph", + "aboutTheGraph": "The Graph 소개", "developer": "개발자", - "supportedNetworks": "Supported Networks", + "supportedNetworks": "지원되는 네트워크", "collapse": "Collapse", "expand": "Expand", "previous": "Previous", From c9b1f73749e8ce7faf8de7c94ee0d49f868734c6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:23 -0500 Subject: [PATCH 209/241] New translations indexing.mdx (Korean) --- pages/ko/indexing.mdx | 350 +++++++++++++++++++++--------------------- 1 file changed, 175 insertions(+), 175 deletions(-) diff --git a/pages/ko/indexing.mdx b/pages/ko/indexing.mdx index ae7c24151872..7485645acff9 100644 --- a/pages/ko/indexing.mdx +++ b/pages/ko/indexing.mdx @@ -4,47 +4,47 @@ title: 인덱싱(indexing) import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +인덱서는 인덱싱 및 쿼리 프로세싱 서비스를 제공하기 위해 더그래프 네트워크 상에 그래프 토큰(GRT)을 스테이킹하는 노드 운용자들입니다. 인덱서는 그들의 서비스에 대한 쿼리 수수료 및 인덱싱 보상을 얻습니다. 더불어, 그들은 Cobbs-Douglas 리베이트 기능에 따라, 그들의 업무에 비례하여 모든 네트워크 기여자와 함께 공유되는 리베이트 풀로부터 발생하는 수익 또한 얻습니다. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +프로토콜에 스테이킹된 GRT는 해빙 기간이 적용되며, 인덱서가 악의적으로 응용 프로그램에 잘못된 데이터를 제공하거나 잘못된 인덱싱을 시행하는 경우 슬래싱(삭감) 패널티를 받을 수 있습니다. 또한, 인덱서들은 네트워크에 기여하기 위해서 위임자(Delegator)들로 부터 지분을 위임받을 수도 있습니다. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +인덱서들은 서브그래프의 큐레이션 신호에 따라 인덱싱할 서브그래프를 선택합니다. 여기서 큐레이터는 어느 서브그래프가 고품질인지, 혹은 우선 순위여야 하는지를 표시하기 위해 GRT를 스테이킹합니다. 소비자(예: 애플리케이션)들은 어느 인덱서가 그들의 서브그래프에 대해 쿼리를 처리하게 할 것인지에 대한 매개 변수 및 쿼리 수수료 가격에 대한 선호 내역을 설정할 수도 있습니다. ## FAQ -### What is the minimum stake required to be an indexer on the network? +### 네트워크 상의 인덱서가 되기 위해서 필요한 최소 스테이킹 요구사항은 어떻게 되나요? -The minimum stake for an indexer is currently set to 100K GRT. +인덱서가 되기 위한 최소 스테이킹 수량은 현재 10만 GRT로 설정되어 있습니다. -### What are the revenue streams for an indexer? +### 인덱서는 어떻게 수익을 창출하나요? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Query fee rebates** - 네트워크상에 쿼리를 제공함으로써 발생하는 지불입니다. 이러한 지불은 인덱서와 게이트웨이 간의 상태 채널을 통해 중재됩니다. 
게이트웨이의 각 쿼리 요청에는 결제 및 쿼리 결과 유효성에 대한 해당 응답이 포함됩니다. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - 연간 3%의 프로토콜 전체 인플레이션을 통해 생성되는 인덱싱 보상은 네트워크에 대한 서브그래프 배포를 인덱싱하는 인덱서에게 배포됩니다. -### How are rewards distributed? +### 보상은 어떻게 분배되나요? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +인덱싱 보상은 연간 발행량의 3%로 설정된 프로토콜 인플레이션에서 비롯됩니다. 이러한 보상들은 각각에 대한 모든 큐레이션 신호의 비율에 따라 서브그래프들에 배포된 다음 해당 서브그래프에 할당된 지분에 기반하여 인덱서들에게 비례적으로 분배됩니다. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). +보상을 계산하기 위한 수많은 도구들이 커뮤니티에 의해서 생성되었습니다. 여러분들은 [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)에서 이러한 도구 컬렉션들을 찾으실 수 있습니다. 또한 여러분들은 [Discord](https://discord.gg/vtvv7FP) 의 #delegators 및 #indexers 채널에서 최신 도구 리스트를 찾으실 수 있습니다. -### What is a proof of indexing (POI)? +### 인덱싱 증명(POI)이란 무엇인가요? -POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POI는 네트워크상에서 인덱서가 그들에게 할당된 서브그래프를 인덱싱 하고 있는지 확인하는 데 사용됩니다. 해당 할당이 적절하게 인덱싱 보상을 받을 수 있도록 하기 위하여, 할당을 마감할 당시 현재 에폭의 첫 번째 블록에 대한 POI가 제출되어야합니다. 블록에 대한 POI는 해당 블록까지의 특정 서브그래프 배포에 대한 모든 엔티티 저장소 트랜잭션에 대한 요약입니다. -### When are indexing rewards distributed? +### 인덱싱 보상은 언제 분배되나요? -Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +할당은 활성 상태인 동안에 지속적으로 보상을 누적합니다. 보상들은 인덱서들에 의해 수집되며, 그들의 할당들이 마감될 때 마다 분배됩니다. 이는 인덱서가 강제로 종료하길 원할 때마다 수동으로 발생하거나 28 에폭 후에 위임자(Delegator)가 인덱서 할당을 닫을 수 있지만, 이러한 경우에는 결과적으로 보상이 생성되지 않습니다. 28 에폭은 최대 할당 수명입니다. (현재 한 에폭은 최대 24시간 지속됩니다.) -### Can pending indexer rewards be monitored? +### 보류중인 인덱서 보상은 모니터링 가능한가요? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. 
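For operators who prefer to script this check instead of using Etherscan, a small TypeScript sketch in the ethers v5 style is shown here. The one-line ABI fragment and the RPC URL are assumptions for illustration and should be verified against the deployed contract; the address is the Rewards contract used in the Etherscan steps below, and allocation IDs can come from the mainnet-subgraph query that follows.

```typescript
import { ethers } from "ethers";

// Read pending indexing rewards for one allocation via RewardsManager.getRewards().
// The human-readable ABI below is an assumption (address in, uint256 GRT out);
// check it against the contract source linked above before relying on it.
const REWARDS_MANAGER = "0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66";
const ABI = ["function getRewards(address allocationID) view returns (uint256)"];

async function pendingRewards(allocationID: string, rpcUrl: string): Promise<string> {
  const provider = new ethers.providers.JsonRpcProvider(rpcUrl); // any mainnet RPC
  const rewardsManager = new ethers.Contract(REWARDS_MANAGER, ABI, provider);
  const pending = await rewardsManager.getRewards(allocationID);
  return ethers.utils.formatEther(pending); // GRT has 18 decimals, like ETH
}

// Example: allocation IDs are returned by the indexerAllocations query below.
pendingRewards("0xYourAllocationID", "http://localhost:8545")
  .then((grt) => console.log(`pending rewards: ${grt} GRT`));
```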
+다양한 커뮤니티에 의해 제작된 대시보드들에는 보류중인 보상 가치를 포함하고 있으며, 이들은 다음과 같은 절차들을 통해 수동으로 손쉽게 확인이 가능합니다. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +`getRewards()`:를 호출하기 위해 이더스캔을 사용합니다. -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. 모든 활성화된 활당들에 대한 ID들을 얻기 위해 [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet)를 쿼리합니다. ```graphql query indexerAllocations { @@ -62,57 +62,57 @@ query indexerAllocations { Use Etherscan to call `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- [Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract)의 이더스캔 인터페이스를 살펴봅니다. -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +* `getRewards()`:를 호출하기 위해, + - **10번 항목의 getRewards**를 펼칩니다. getRewards dropdown. + - 입력란에 **allocationID**를 입력합니다. + - **Query** 버튼을 클릭합니다. -### What are disputes and where can I view them? +### 분쟁이란 무엇이며 어디에서 볼 수 있나요? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +분쟁 기간 동안 더그래프 상에서 인덱서의 쿼리와 할당은 이의 제기의 요소가 될 수 있습니다. 분쟁 기간은 분쟁의 종류에 따라 다릅니다. 쿼리/귀속 분야에는 7개의 에폭 분쟁 창이 존재하는 반면 할당에는 56개의 에폭이 존재합니다. 이 기간이 지나면 할당이나 쿼리에 대해 분쟁은 발생할 수 없습니다. 분쟁이 열리면 Fishermen에게 최소 10,000 GRT의 디파짓이 요구되며, 이 보증금은 분쟁이 마무리되고 해결이 이루어질 때까지 락업됩니다. Fishermend은 분쟁을 제기한 모든 네트워크 참여자입니다. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +분쟁은 `Disputes` 탭 하부의 인덱서 프로파일 페이지 내의 UI에서 볼 수 있습니다. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- 해당 분쟁이 반려되면, Fishermen이 스테이킹한 GRT가 소각되고 해당 분쟁에서 언급된 인덱서는 슬래싱 삭감을 받지 않습니다. +- 분쟁이 무승부로 결론이 나면, Fishermen들의 예치금은 반환되고 논란이 되고 있는 인덱서는 슬래싱 삭감을 받지 않을 것입니다. +- 만약 해당 분쟁이 받아들여지면, Fishermen들이 예치한 GRT가 반환되고 분쟁 중인 인덱서는 슬래싱 삭감을 받으며, Fishermen은 해당 GRT 삭감 수량의 50%를 얻게 됩니다. Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. -### What are query fee rebates and when are they distributed? +### 쿼리 수수료 리베이트는 무엇이며 언제 배포되나요? -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. 
The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. +어떠한 할당이 닫히고, Subgraph의 쿼리 수수료 리베이트 풀에 누적될 때마다 게이트웨이에 쿼리 수수료들이 누적됩니다. 리베이트 풀은 인덱서가 그들이 네트워크에 대해 얻는 쿼리 수수료들의 양에 대략적인 비율의 스테이킹 할당을 장려하도록 설계되었습니다. 특정 Indexer에 지급되는 풀의 쿼리 수수료 비율은 Cobbs-Douglas Production Function을 사용하여 계산됩니다; 각 Indexer에게 분배되는 금액은 풀에 대한 그들의 기여도 및 Subgraph에 대한 지분 할당에 관한 함수관계에 있습니다. -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. +할당이 종료되고 분쟁 기간이 지나면 인덱서에 의해 리베이트가 청구될 수 있습니다. 리베이트 청구 시 쿼리 수수료들은 리베이트는 queryFeeCut 및 위임 풀 비율에 기반하여, 인덱서와 해당 위임자(Delegator)들에게 분배됩니다. -### What is query fee cut and indexing reward cut? +### query fee cut 및 indexing reward cut는 무엇인가요? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. +`queryFeeCut` 및 `indexingRewardCut` 값은 Indexer가 해당 Indexer와 Delegator 간의 GRT 분배를 제어하기 위해 CooldownBlocks와 함께 설정할 수 있는 위임 매개 변수입니다. 위임 매개변수 설정에 대한 지침을 위해 [Staking in the Protocol](/indexing#stake-in-the-protocol)의 마지막 단계를 참조하시길 바랍니다. -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. +- **queryFeeCut** - 서브그래프에 축적되어 인덱서에게 분배 될 쿼리 피 리베이트의 비율(%)입니다. 만약 이 값이 95%로 설정된 경우, 해당 인덱서는 어떠한 분배가 청구될 때, 해당 쿼리 수수료 리베이트 풀의 95%를 가져가게 되고, 나머지 5%는 위임자(Delegator)들에게 분배됩니다. -- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. +- **indexingRewardCut** - 서브그래프 상에 축적되어 인덱서에게 분배 될 인덱싱 보상의 비율(%)입니다. 이 값이 95%로 설정된 경우, 할당이 닫힐 때 인덱서는 인덱싱 보상 풀의 95%를 받고, 위임자들은 나머지 5%를 분배받습니다. -### How do indexers know which subgraphs to index? +### 인덱서는 인덱싱할 서브그래프를 어떻게 알 수 있습니까? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +인덱서는 서브그래프 인덱싱 결정을 위한 고급 기술을 적용하여 스스로 차별화가 가능하지만, 일반적인 아이디어를 제공하기 위해 네트워크에서 서브그래프를 평가하는 데 사용되는 몇 가지 주요 메트릭스에 대해 설명하겠습니다. -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **큐레이션 신호** - 특정 서브그래프에 적용되는 네트워크 큐레이션 신호의 비율은 특히 쿼리 볼류밍이 증가하는 부트스트랩 단계 동안 해당 서브그래프에 대한 관심을 나타내는 좋은 지표가 됩니다. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **축적된 쿼리 수수료** - 특정 서브그래프에 대해 수집된 쿼리 수수료 양에 대한 과거 데이터는 미래 수요를 나타내는 좋은 지표입니다. 
-- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **스테이킹 수량** - 다른 인덱서의 동작을 모니터링하거나 특정 서브그래프에 할당된 총 지분 비율을 살펴보면 인덱서가 서브그래프 쿼리에 대한 공급 측을 모니터링하여 네트워크가 신뢰하는 서브그래프 또는 더 많은 공급을 필요로 하는 서브그래프를 식별할 수 있습니다. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **인덱싱 보상이 없는 서브그래프** - 일부 서브그래프는 IPFS와 같은 지원되지 않는 기능을 사용하거나 메인넷 외부의 다른 네트워크를 쿼리하기 때문에 인덱싱 보상을 생성하지 않습니다. 만약 서브그래프가 인덱싱 보상을 생성하지 않을 경우, 여러분들은 서브그래프 상에서 메세지를 보게 될 것입니다. -### What are the hardware requirements? +### 하드웨어 요구사항은 어떻게되나요? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Small** - 몇몇 서브그래프들에 대한 인덱싱을 시작하기에는 충분하지만, 추후에 더 개선해야할 가능성이 존재합니다. +- **Standard** - 기본 설정이며, 이는 k8s/terraform 배포 매니페스트에서 사용됩니다. +- **Medium** - 100개의 Subgraph 및 초당 200 - 500개의 요청을 서포트 할 수 있는 프로덕션 인덱서입니다. +- **Large** - 현재 사용되는 모든 서브그래프들 및 관련 트레픽 요청의 처리에 대한 요건을 충족합니다. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| @@ -121,48 +121,48 @@ Indexers may differentiate themselves by applying advanced techniques for making | Medium | 16 | 64 | 2 | 32 | 64 | | Large | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an indexer should take? +### 인덱서가 취해야 할 기본적인 보안 예방 조치는 무엇인가요? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. +- **운영자 지갑** - 운영자 지갑을 설정하면 인덱서가 지분을 제어하는 키와 일상적인 작업을 제어하는 키를 분리할 수 있으므로 중요한 예방 조치가 됩니다. 자세한 내용은 [Stake in Protocol](/indexing#stake-in-the-protocol)를 읽어보시기 바랍니다. -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **중요사항**: 포트들이 공공연하게 공개되는 것에 각별한 주의를 기울이시길 바랍니다. - **어드민 포트**는 반드시 잠겨있어야 합니다. 이는 아래에 자세히 설명된 더그래프 노드 JSON-RPC 및 인덱서 관리 엔드포인트가 포함됩니다. ## Infrastructure -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +인덱서 인프라의 중심에는 이더리움을 모니터링하고, 서브그래프 정의에 따라 데이터를 추출하고 로드하여 [GraphQL API](/about/introduction#how-the-graph-works)로 제공하는 그래프 노드가 있습니다. 더그래프 노드는 Ethereum EVM 노드 엔드포인트들과 IPFS 노드(데이터 소싱)에 연결되어야 합니다. 이는 해당 스토리지의 PostgreSQL 데이터베이스 및 네트워크와의 상호 작용을 용이하게 하는 인덱서 구성 요소들입니다. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. +- **PostgreSQLPostgreSQL database** - 더그래프 노드의 메인 스토어입니다. 이곳에 서브그래프의 데이터가 저장됩니다. 또한 인덱서서비스 및 에이전트는 데이터베이스를 사용하여 상태 채널 데이터, 비용 모델 및 인덱싱 규칙을 저장합니다. -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. +- **이더리움 앤드포인트** - 이더리움JSON-RPC API를 노출하는 앤드포인트입니다. 이는 단일 이더리움 클라이언트의 형태를 취하거나 다중에 걸친 로드 밸런싱이 보다 복잡한 설정이 될 수 있습니다. 특정 서브그래프는 Achive mode 및 API 추적 등 특정 이더리움 클라이언트 기능을 필요로 할 것이라는 점을 유념하는 것이 중요합니다. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS 노드(5 미만 버젼)** - 서브그래프 배포 메타데이터는 IPFS네트워크에 보존됩니다. 더그래프 노드는 주로 서브그래프 배포 중에 IPFS 노드에 액세스하여 서브그래프 매니페스트와 연결된 모든 파일을 가져옵니다. 
네트워크 인덱서는 자체 IPFS 노드를 호스트할 필요가 없으며 네트워크의 IPFS 노드는 https://ipfs.network.thegraph.com에서 호스팅됩니다. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **인덱서 서비스** - 네트워크와의 모든 필수 외부 커뮤니케이션을 처리합니다. 비용 모델과 인덱싱 상태를 공유하고, 게이트웨이에서 그래프 노드로 쿼리 요청을 전달하며, 게이트웨이를 사용하여 상태 채널을 통해 쿼리 결제를 관리합니다. -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. +- **인덱서 에이전트** - 네트워크에 등록, 그래프 노드에 대한 서브그래프 배포관리 및 할당 관리를 포함하여 체인에 상에서 인덱서 상호작용을 용이하게 합니다. Prometheus metrics 서버 – 더그래프 노드 및 인덱서 구성요소는 매트릭스 서버에 그들의 매트릭스를 기록합니다. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +참고: 신속한 확장성을 지원하기 위해 쿼리 노드와 인덱스 노드 등 서로 다른 노드 세트간에쿼리 및 인덱싱 문제를 구분할 것을 권고합니다. -### Ports overview +### 포트 개요 -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. +> **Firewall** - 오직 인덱서 서비스만 공개적으로 노출되어야 하며 관리 포트 및 데이터베이스 액세스를 잠그는데 특히 주의해야 합니다. 그래프 노드 JSON-RPC 엔드포인트(기본 포트: 8030), 인덱서 관리 API 엔드포인트(기본 포트: 18000), Postgres 데이터베이스 엔드포인트(기본 포트: 5432)는 노출되지 않아야 합니다. -#### Graph Node +#### 그래프 노드 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -170,25 +170,25 @@ Note: To support agile scaling, it is recommended that query and indexing concer | ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | | 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Google Cloud상의 Terraform을 사용한 서버 인프라 구축 -#### Install prerequisites +#### 필수 구성요소 설치 - Google Cloud SDK - Kubectl command line tool - Terraform -#### Create a Google Cloud Project +#### Google Cloud Project 생성 -- Clone or navigate to the indexer repository. +- Indexer 저장소 복제 혹은 탐색 -- Navigate to the ./terraform directory, this is where all commands should be executed. +- ./terraform 디렉토리로 이동. 여기서 모든 명령들이 실행되어야 합니다. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Google Cloud에 인증을 한 후, 새 프로젝트를 생성합니다. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- 새로운 프로젝트에 대한 결제를 가능하게 하기 위해 Google Cloud Console의 결제 페이지를 사용합니다 -- Create a Google Cloud configuration. +- Google Cloud 구성을 생성합니다. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- 요구되는 Google Cloud API들을 사용 가능하도록 설정합니다. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- 서비스 계정을 생성합니다. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- 다음 단계에서 작성될 데이터베이스와 Kubernetes 클러스터 간 피어링을 사용하도록 설정합니다. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- 최소 terraform 구성 파일을 생성합니다(필요에 따라 업데이트). ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### 인프라 생성을 위한 Terraform 사용 -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +어떠한 명령이라도 실행하기 전에 [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) 를 읽고, 이 디렉토리에서 `terraform.tfvars` 을 생성합니다. (혹은 이전 단계에서 우리가 생성한 파일을 수정하여 사용하셔도 됩니다.) 기본값을 재정의하거나 값을 설정해야 하는 각 변수에 대해 `terraform.tfvars`에 설정값을 입력합니다. -- Run the following commands to create the infrastructure. +- 인프라 구성을 위해 아래의 명령어들을 실행합니다. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. 
+`kubectl apply -k $dir`의 모든 리소스들을 배포합니다. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### 인덱서를 위한 Kubernetes 구성요소 생성 -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- `k8s/overlays`디렉토리를 새로운 `$dir,` 디렉토리에 복사합니다. 그리고 `bases` 엔트리를`$dir/kustomization.yaml` 로 조정하여 `k8s/base`디렉토리로 지정하게 합니다. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- `$dir`의 모든 파일을 읽고 코멘트에 표시된 대로 값을 조정합니다. Deploy all resources with `kubectl apply -k $dir`. -### Graph Node +### 그래프 노드 -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[그래프 노드](https://github.com/graphprotocol/graph-node) 는 이벤트가 Ethereum 블록 체인을 소싱하여 GraphQL 엔드포인트를 통해 쿼리할 수 있는 데이터 저장소를 결정적으로 업데이트하는 오픈 소스 러스트 구현입니다. 개발자는 서브그래프를 사용하여 schema를 정의하고, 블록체인과 그래프 노드에서 소싱된 데이터를 변환하기 위한 매핑 세트를 사용하여 전체 체인을 동기화하고, 새로운 블록들을 모니터링하며, GraphQL 엔드포인트를 통해 이를 제공합니다. -#### Getting started from source +#### 소스에서 시작하기 -#### Install prerequisites +#### 필수 구성 요소 설치 - **Rust** @@ -307,7 +307,7 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Ubuntu 유저들에 대한 추가 요구사항** - Ubuntu 상에서 그래프 노드를 운영하기 위해서는 몇 가지 추가 패키지들이 요구됩니다. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config @@ -315,7 +315,7 @@ sudo apt-get install -y clang libpg-dev libssl-dev pkg-config #### Setup -1. Start a PostgreSQL database server +1. PostgreSQL 데이터베이스 서버를 시작합니다. ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. [그래프 노드](https://github.com/graphprotocol/graph-node) repo를 복사하고 `cargo build` 를 실행하여 소스를 구축합니다. -3. Now that all the dependencies are setup, start the Graph Node: +3. 이제 모든 종속 요소들이 설정되었으므로, Graph노드를 시작합니다. ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### 도커를 사용하여 시작하기 -#### Prerequisites +#### 필수 구성요소 -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **이더리움 노드** - 기본적으로 독커 구성 설정은 여러분들의 호스트 머신에 이더리움을 연결하기 위해 [http://host.docker.internal:8545](http://host.docker.internal:8545) 메인넷을 사용할 것입니다. 여러분들은 `docker-compose.yaml`을 업데이트 함으로써 이 네트워크의 이름 및 url을 변경하실 수 있습니다. #### Setup -1. Clone Graph Node and navigate to the Docker directory: +1. 그래프 노드를 복사하고 docker 디렉토리로 이동합니다. 
```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. 리눅스 사용자들의 경우에는 `docker-compose.yaml` 내의 `host.docker.internal` 대신 호스트 IP 주소를 사용합니다. 이 때, 아래의 스크립트를 사용합니다. ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. 여러분의 이더리움 엔드포인트에 연결될 로컬 그래프 노드를 시작합니다. ```sh docker-compose up ``` -### Indexer components +### 인덱서 구성요소 -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: +성공적으로 네트워크에 참여하기 위해서는 거의 지속적인 모니터링과 상호작용이 필요하므로, 저희는 인덱서들의 네트워크 참여를 용이하게 하기 위해 Typescript 어플리케이션 제품군을 구축했습니다. 다음과 같은 세 가지 인덱서 구성요소가 존재합니다. -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- **인덱서 에이전트** - 에이전트는 네트워크와 인덱서의 자체 인프라를 모니터링하고 인덱싱 및 할당되는 서브그래프 배포와 각각에 할당되는 양을 관리합니다. -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **인덱서 서비스** - 외부에 노출되어야 하는 유일한 구성요소인 이 서비스는 서브그래프 쿼리를 그래프 노드로 전달하고 쿼리 결제를 위한 상태 채널을 관리하며 게이트웨이와 같은 클라이언트에게 중요한 의사 결정 정보를 공유합니다. -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- **Indexer CLI** - 인덱서 에이전트를 관리하기 위한 명령줄 인터페이스입니다. 이는 인덱서들이 비용모델 및 인덱싱 규칙들을 관리할 수 있도록 합니다. -#### Getting started +#### 시작하기 -The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +인덱서 에이전트 및 인덱서 서비스는 그래프 노드 인프라와 함께 배치되어야 합니다. 인덱서 구성 요소를 위한 가상 실행 환경을 설정하는 방법은 여러 가지가 있습니다. 여기서는 NPM 패키지 또는 소스를 사용하여 baremetal 상에서 실행하거나, Google Cloud Kubernetes Engine의 Kubernetes 및 Docker를 통해 실행하는 방법에 대해 설명합니다. 이러한 설정 예제가 여러분들의 인프라로 잘 적용되지 않을 경우, 참조를 위한 커뮤니티 가이드가 있을 것입니다. [디스코드 채널](https://thegraph.com/discord) 에 방문하셔서 안녕! 이라고 말해보시길 바랍니다. 여러분들의 인덱서 구성 요소들을 시작하기 전에 반드시 [프로토콜 내에 스테이킹](/indexing#stake-in-the-protocol)을 해야 한다는 것을 기억하시길 바랍니다! -#### From NPM packages +#### NPM 패키지를 사용할 경우 ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### 소스를 사용할 경우 ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... 
``` -#### Using docker +#### 도커를 사용할 경우 -- Pull images from the registry +- 레지스트리에서 이미지 불러오기 ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +**참고**: 콘테이너들을 시작한 이후에, 인덱서 서비스는 [http://localhost:7600](http://localhost:7600)에 접근할 수 있으며, 해당 인덱서 에이전트는 [http://localhost:18000/](http://localhost:18000/)에 인덱서 관리 API를 노출하여야 합니다. ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- 구성요소 실행 ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). +[Google Cloud상 Terraform 사용하여 서버인프라 구축하기](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) 섹션을 참고하시기 바랍니다. -#### Using K8s and Terraform +#### K9s 및 Terraform을 사용할 경우 -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section +인덱서 CLI는 `graph indexer`터미널에 접근할 수 있는 [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)를 위한 플러그인입니다. -#### Usage +#### 사용 -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **참고**: 모든 런타임 구성 변수는 시작시 명령에 매개변수로 적용되거나 `COMPONENT_NAME_VARIABLE_NAME`(예. `INDEXER_AGENT_ETHEREUM`) 형식의 환경 변수를 사용할 수 있습니다. -#### Indexer agent +#### 인덱서 에이전트 ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### 인덱서 서비스 ```sh SERVER_HOST=localhost \ @@ -513,7 +513,7 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### 인덱서 CLI The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. @@ -522,35 +522,35 @@ graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### 인덱서 CLI를 사용한 인덱서 관리 -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. +인덱서 에이전트는 해당 인덱서를 대신하여 네트워크와 자동으로 상호 작용하기 위해서는 인덱서로부터의 입력이 필요합니다. 인덱서 에이전트 행동을 정의하는 메커니즘은**인덱싱 규칙**입니다. **인덱싱 규칙**을 사용하여, 인덱서는 인덱싱 하거나 쿼리를 제공하기 위해 서브그래프 선택에 대한 그들의 특별한 전략을 적용할 수 있습니다. 규칙은 에이전트에서 제공하는 GraphQL API를 통해 관리되며 이는 인덱서 관리 API로 알려져 있습니다. **인덱서 관리 API**와 상호작용하기 위해 추천되는 도구는 **Graph CLI**로의 확장인 **Indexer CLI**입니다. -#### Usage +#### 사용 -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. 
+**Indexer CLI**는 일반적으로 포트 포워딩을 통해 인덱서 에이전트에 연결되므로 CLI를 동일한 서버 또는 클러스터에서 실행할 필요가 없습니다. 여러분들의 시작에 도움을 드리고, 컨텍스트를 제공하기 위해 CLI에 대해 간략히 설명하도록 하겠습니다. -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - 인덱서 관리 API에 연결합니다. 일반적으로 서버에 대한 연결은 포트 포워딩을 통해 열려, CLI는 원격으로 쉽게 작동될 수 있습니다. (예: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. +- `graph indexer rules get [options] ...]` - `all`을 ``로 사용하여 하나 혹은 그 이상의 인덱싱 규칙들을 가져오거나, `global`로 사용하여 글로벌 기본값을 가져옵니다. 추가적인 독립변수 `--merged` 는 글로벌 규칙과 병합되도록 특별한 규칙들을 배포하기 위해 특별히 사용될 수 있습니다. 인덱서 에이전트에 적용되는 방법은 이와 같습니다. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - 하나 혹은 그 이상의 인덱싱 규칙들을 설정합니다. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - 사용 가능한 경우, 서브그래프 배포 인덱싱을 시작하며, 해당`decisionBasis`를 `always`로 설정합니다. 이를 통해 인덱서 에이전트는 항상 그것을 인덱싱하도록 선택합니다. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - 배포에 대한 인덱싱을 정지하며, 해당 `decisionBasis` 를 never로 설정합니다. 이를 통해 인덱싱을 위한 배포들에 관한 결정을 할 때, 이 배포를 건너뜁니다. -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — `rules`에 배포를 위한 `thedecisionBasis`를 설정합니다. 이를 통해 인덱서 에이전트는 이 배포를 인덱싱할지 여부를 결정하기 위해 인덱싱 규칙들을 사용하게 됩니다. -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +독립변수 `-output`을 사용하여 Output에 규칙들을 나타내는 모든 명령들은 지원되는 출력 형식 중 하나를 선택할 수 있습니다. (`table`, `yaml`, 및 `json`) -#### Indexing rules +#### 인덱싱 규칙 -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +인덱싱 규칙은 글로벌 기본값으로 적용되거나, ID들을 사용하여 특정 서브그래프 배포들에 적용될 수 있습니다. 다른 필드들은 모두 선택사항인 반면에, `deployment`와 `decisionBasis` 영역은 필수사항입니다. 인덱싱 규칙에 `rules`가 `decisionBasis`로 되어있는 경우, 인덱서 에이전트는 해당 규칙에 대한 비지정 임계값을 해당 배포를 위해 네트워크에서 가져온 값과 비교합니다. 서브그래프 배포 값이 어떠한 임계값들 이상(혹은 이하)이면, 이는 인덱싱을 위해 선택됩니다. 
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +예를 들어, 만약에 해당 글로벌 규칙이 **5** (GRT)의 `minStake`를 포함하면, 5개 이상의 GRT 지분이 할당된 모든 서브그래프들은 인덱싱됩니다. 임계값 규칙들은 `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, 그리고 `minAverageQueryFees`를 포함합니다. -Data model: +데이터 모델: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### 비용 모델 -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. 비용모델들은 마켓 및 쿼리 속성을 기반으로 한 쿼리들에 대한 동적 가격 책정을 제공합니다 인덱서 서비스는 쿼리에 응답하려는 각 서브그래프의 게이트웨이와 비용모델을 공유합니다. 결국, 게이트웨이는 쿼리당 인덱서 선택 결정 및 선택된 인덱서와의 지불 협상을 위해 비용 모델을 사용합니다. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Agora 언어는 쿼리들에 대한 비용 모델을 공고하기 위한 유연한 형식을 제공합니다. Agora 가격 모델은 GraphQL 쿼리의 각 최상위 쿼리에 대해 순서대로 실행되는 일련의 성명입니다. 각 최상위 쿼리에 대해 일치하는 첫 번째 성명이 해당 쿼리의 가격을 결정합니다. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +성명은 GraphQL 쿼리를 일치시키는 데 사용되는 술어와 평가 시 비용을 소수점 단위의 GRT로 나타내는 비용 식으로 구성됩니다. 쿼리의 명명된 인수 위치에 있는 값은 술어에서 캡처되어 식에 사용될 수 있습니다. 어떠한 표현식에서 플레이스 홀더들을 위해 전체 내용은 설정 및 대체될 수도 있습니다. -Example cost model: +위의 모델을 사용하는 쿼리 가격책정 예시: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +비용 모델 예시: | Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Applying the cost model +#### 해당 비용 모델 적용 -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +비용 모델은 데이터베이스에 저장하기 위해 인덱서 에이전트의 인덱서 관리 API로 비용 모델들을 전달하는 인덱서 CLI를 통해 적용됩니다. 그런 다음 이들에 대한 요청이 있을 때 마다, 해당 인덱서 서비스는 이들을 선정하여 게이트웨이들에 해당 비용 모델들을 제공합니다. 
```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## 네트워크와의 상호작용 -### Stake in the protocol +### 프로토콜에 스테이킹하기 -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +네트워크에 인덱서로 참여하기 위한 첫 번째 단계는 프로토콜을 승인하고, 자금을 스테이킹하며, 일상적인 프로토콜 상호 작용을 위한 운영자 주소를 설정하는 것(선택적)입니다. \_ **참고**: 본 지침의 목적을 위하여 컨트렉트 상호작용에 리믹스가 사용 되지만,원하시는 툴 사용에 개의치 마시기 바랍니다.([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) 및 [MyCrypto](https://www.mycrypto.com/account)는 알려진 몇 가지 다른 툴입니다.) -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +인덱서에 의해 생성된 이후, 건강한 할당은 4가지 상태를 거칩니다. -#### Approve tokens +#### 토큰 승인 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. 브라우저에서 [Remix app](https://remix.ethereum.org/)을 엽니다. -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. `File Explorer`에 [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json)와 함께 **GraphToken.abi**로 명명된 파일을 생성합니다. -3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. +3. 해당 에디터에서 선택되고 열린 `GraphToken.abi`를 통해 Remix 인터페이스에서 `Deploy` 및 `Run Transactions` 섹션으로 전환합니다. -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. 환경에서 `Injected Web3`를 선택하고, `Account`에서 여러분의 인덱서 주소를 선택합니다. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. - `At Address`옆에 그래프 토큰 컨트렉트 주소를 붙여 넣습니다.(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) 이후 `At address` 버튼을 클릭하여 적용합니다. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. 스테이킹 컨트렉트를 승인하기 위해 `approve(spender, amount)` 기능을 불러옵니다. `spender`에 스테이킹 컨트렉트 주소 (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`)를 채워넣고, `amount`에 스테이킹 할 토큰과 함께 수량을 입력합니다. -#### Stake tokens +#### 토큰 스테이킹 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. 브라우저에서 [Remix app](https://remix.ethereum.org/)을 엽니다. -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. `File Explorer`에 staking ABI와 함께 **Staking.abi**로 명명된 파일을 생성합니다. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. +3. 에디터에서 선택되고 열린 `Staking.abi`를 통해, Remix 인터페이스에서 `Deploy` 및 `Run Transactions` 섹션으로 전환합니다. -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. 환경에서 `Injected Web3`를 선택하고, `Account`에서 여러분의 인덱서 주소를 선택합니다. -5. 
Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. - `At Address` 옆에 스테이킹 컨트렉트 주소(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) 를 붙여넣고, `At address`버튼을 클릭하여 적용합니다. -6. Call `stake()` to stake GRT in the protocol. +6. 프로토콜에 GRT를 스테이킹 하기 위해 `stake()`를 호출합니다. -7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (선택 사항) 인덱서는 자금을 제어하는 키를 서브그래프 할당 및 (유료) 쿼리 제공과 같은 일상적인 작업을 수행하는 키로부터 분리하기 위해 인덱서 인프라의 운영자로 다른 주소를 승인할 수 있습니다. 운영자 설정을 위해 해당 운영자 주소와 함께 `setOperator()`를 호출합니다. -8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (선택사항) Indexer들은 보상의 분배를 제어하고 전략적으로 위임자들을 끌어들이기 위해 그들의 indexingRewardCut(백만 개 당), queryFeecut(백만개 당) 그리고 cooldownBlocks(블록들의 수)를 업데이트 함으로써 그들의 위임 매개 변수를 업데이트 할 수 있습니다. 이를 위해 `setDelegationParameters()`를 호출합니다. 아래의 예제는 쿼리 보상의 95%를 인덱서에게 분배하고, 5%를 위임자들에게 분배하도록 queryFeeCut을 설정하고, 인덱싱 리워드의 60%를 Indexer에게 분배하고, 40%를 위임자들에게 분배하도록 설정하며, `thecooldownBlocks`의 기간을 500블록으로 설정합니다. ``` setDelegationParameters(950000, 600000, 500) ``` -### The life of an allocation +### 할당의 수명 After being created by an indexer a healthy allocation goes through four states. -- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. +- **활성** - 어떠한 할당이 온체인상에 생성되면([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)), 이는 **활성**으로 간주됩니다. 인덱서 자체 및/또는 위임된 지분 일부가 서브그래프 배포에 할당되고, 이는 그들이 인덱싱 보상을 청구하고 해당 서브그래프 배포에 대한 쿼리를 제공할 수 있도록 합니다. 해당 인덱서 에이전트는 인덱서 규칙에 의거하여 할당 생성을 관리합니다. -- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). +- **종료** - 인덱서는 1 Epoch가 지나면 할당을 종료할 수 있습니다([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)). 이외에도 해당 인덱서 에이전트는 **maxAllocationEpochs**(현재 28일) 가 지난 후 할당을 자동으로 종료합니다. 
유효한 인덱싱 증명(POI)으로 할당이 종료되면 해당 인덱싱 보상이 인덱서 및 해당 위임자들에게 배포됩니다(자세한 내용은 아래의 "보상은 어떻게 분배되나요?" -- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. +- **완결** - 할당이 종료되면 분쟁 기간이 존재하며, 이 분쟁기간 이후에 해당 할당이 **완결**된 것으로 간주되며, 쿼리 수수료 리베이트 또한 클레임(claim()) 가능해집니다. 인덱서 에이전트는 네트워크를 모니터링하여 **완결** 상태인 할당들을 탐지하고 구성 가능한(선택 사항) 임계값인 **—-allocation-claim-threshold**을 초과할 경우 이들을 청구합니다. -- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. +- **청구 완료** - 할당의 최종 상태입니다. - 활성 할당으로 모든 과정을 실행하고, 모든 적격 보상이 배포되었으며 쿼리 수수료 리베이트들이 청구된 상태입니다. From 9e8843da64dbcdc134a6581c9b7b90df16c298b7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:24 -0500 Subject: [PATCH 210/241] New translations indexing.mdx (Chinese Simplified) --- pages/zh/indexing.mdx | 394 +++++++++++++++++++++--------------------- 1 file changed, 197 insertions(+), 197 deletions(-) diff --git a/pages/zh/indexing.mdx b/pages/zh/indexing.mdx index 40d1085c602f..8d398be89d41 100644 --- a/pages/zh/indexing.mdx +++ b/pages/zh/indexing.mdx @@ -4,47 +4,47 @@ title: 索引 import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +索引人是 The Graph 网络中的节点运营商,他们质押 Graph 通证 (GRT) 以提供索引和查询处理服务。 索引人通过他们的服务赚取查询费和索引奖励。 他们还根据 Cobbs-Douglas 回扣函数从回扣池中赚取收益,该回扣池与所有网络贡 ​​ 献者按他们的工作成比例共享。 -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +抵押在协议中的 GRT 会受到解冻期的影响,如果索引人是恶意的并向应用程序提供不正确的数据或索引不正确,则可能会被削减。 索引人也可以从委托人那里获得委托,为网络做出贡献。 -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +索引人根据子图的策展信号选择要索引的子图,其中策展人质押 GRT 以指示哪些子图是高质量的并应优先考虑。 消费者(例如应用程序)还可以设置索引人处理其子图查询的参数,并设置查询费用定价的偏好。 -## FAQ +## 常见问题 -### What is the minimum stake required to be an indexer on the network? +### 成为网络索引人所需的最低股份是多少? -The minimum stake for an indexer is currently set to 100K GRT. +索引人的最低抵押数量目前设置为 10w 个 GRT。 -### What are the revenue streams for an indexer? +### 索引人的收入来源是什么? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**查询费返利** - 为网络上的查询服务支付的费用. 这些支付通过索引人和网关之间的状态通道进行调解。 These payments are mediated via state channels between an indexer and a gateway. 
来自网关的每个查询请求都包含一个支付和相应的响应,一个查询结果有效性的证明。 来自网关的每个查询请求都包含一个支付和相应的响应,一个查询结果有效性的证明。 -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. +**索引奖励** - 通过每年 3%的协议范围通货膨胀产生,索引奖励分配给为网络进行子图部署索引的索引人。 -### How are rewards distributed? +### 奖励如何分配? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +索引奖励来自协议通胀,每年发行量设定为 3%。 它们根据每个子图上所有策展信号的比例分布在子图上,然后根据他们在该子图上分配的股份按比例分配给索引人。 **一项分配必须以符合仲裁章程规定的标准的有效索引证明(POI)来结束,才有资格获得奖励。** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). +社区创建了许多用于计算奖励的工具,您会在 [“社区指南”集合](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)中找到它们。 您还可以在 [Discord 服务器](https://discord.gg/vtvv7FP)上的 #delegators 和 #indexers 频道 ​​ 中找到最新的工具列表。 -### What is a proof of indexing (POI)? +### 什么是索引证明 (POI)? -POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +网络中使用 POI 来验证索引人是否正在索引它们分配的子图。 在关闭该分配的分配时,必须提交当前时期第一个区块的 POI,才有资格获得索引奖励。 块的 POI 是特定子图部署的所有实体存储事务的摘要,直到并包括该块。 -### When are indexing rewards distributed? +### 索引奖励什么时候发放? -Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +分配在活跃时不断累积奖励。 奖励由索引人收集,并在分配结束时分发。 这可以手动发生,每当索引人想要强制关闭它们时,或者在 28 个时期后,委托人可以关闭索引人的分配,但这会导致没有奖励被铸造。 28 个时期 是最大分配生命周期(现在,一个 时期持续约 24 小时)。 -### Can pending indexer rewards be monitored? +### 可以监控待处理的索引人奖励吗? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. +许多社区制作的仪表板包括待处理的奖励值,可以通过以下步骤轻松地手动检查它们: -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +使用 Etherscan 调用`getRewards()`: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. 
查询主网[子图](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) 以获取所有活动分配的 ID: ```graphql query indexerAllocations { @@ -60,135 +60,135 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +使用Etherscan调用 `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- 导航到[奖励合约的 Etherscan 界面](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +* 调用`getRewards()`: + - 展开 **10. getRewards** 下拉菜单。 getRewards dropdown. + - 在输入中输入**分配 ID**. + - 点击**查询**按钮. -### What are disputes and where can I view them? +### 什么是争议? 在哪里可以查看? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +在争议期间,索引人的查询和分配都可以在 The Graph 上进行争议。 争议期限因争议类型而异。 查询/证明有 7 个时期的争议窗口,而分配有 56 个时期。 在这些期限过后,不能对分配或查询提出争议。 当争议开始时,渔夫需要至少 10,000 GRT 的押金,押金将被锁定,直到争议结束并给出解决方案。 渔夫是任何引发争议的网络参与者。 -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +可以在 UI 中的索引人配置文件页面中的 `Disputes` 选项卡下查看争议 。 -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- 如果争议被驳回,渔夫存入的 GRT 将被烧毁,争议的 索引人将不会被削减。 +- 如果以平局方式解决争议,渔夫的押金将被退还,并且争议的索引人不会被削减。 +- 如果争议被接受,渔夫存入的 GRT 将被退回,有争议的 索引人将被削减,渔夫将获得被削减的 GRT 的 50%。 -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +争议可以在用户界面中的 `争议`标签下的索引人档案页中查看。 -### What are query fee rebates and when are they distributed? +### 什么是查询费奖励? 何时发放? -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. +每当分配关闭并累积在子图的查询费用回扣池中时,网关就会收取查询费用。 回扣池旨在鼓励索引人按他们为网络赚取的查询费用的粗略比例分配股份。 池中分配给特定索引人的查询费用部分使用 Cobbs-Douglas 生产函数计算;每个索引人的分配量是他们对池的贡献和他们在子图上的股份分配的函数。 -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. 
+一旦分配已结束且争议期已过,索引人就可以要求回扣。 查询费用回扣根据查询费用减免和委托池比例分配给索引人及其委托人。 -### What is query fee cut and indexing reward cut? +### 什么是查询费减免和索引奖励减免? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. +`queryFeeCut` 和 `indexingRewardCut` 值是委托的参数,该索引可以设置连同 cooldownBlocks 控制 GRT 的索引和他们的委托人之间的分配。 有关设置委托参数的说明,请参阅[协议中的质押](/indexing#stake-in-the-protocol)的最后步骤。 -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. +- **查询费用削减** - 在将分配给索引人的子图上累积的查询费用回扣的百分比。 如果将其设置为 95%,则在申请分配时,索引人将获得查询费用回扣池的 95%,另外 5% 将分配给委托人。 -- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. +- **索引奖励削减** - 将分配给索引人的子图上累积的索引奖励的百分比。 如果将其设置为 95%,则当分配结束时,索引人将获得索引奖励池的 95%,而委托人将分配其他 5%。 -### How do indexers know which subgraphs to index? +### 索引人如何知道要索引哪些子图? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +索引人基础设施的中心是 Graph 节点,它监控 Ethereum,根据子图定义提取和加载数据,并以 [GraphQL API](/about/introduction#how-the-graph-works)形式为其服务 Graph 节点需要连接到 Ethereum EVM 节点端点,以及 IPFS 节点,用于采购数据;PostgreSQL 数据库用于其存储;以及索引人组件,促进其与网络的交互。 -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **策展信号** - 应用于特定子图的网络策展信号的比例是对该子图兴趣的一个很好的指标,尤其是在引导阶段,当查询量不断上升时。 -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **收取的查询费** - 特定子图收取的查询费的历史数据是未来需求的良好指标。 -- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **质押量** - 监控其他索引人的行为或查看分配给特定子图的总质押量的比例,可以让索引人监控子图查询的供应方,以确定网络显示出信心的子图或可能显示出需要更多供应的子图。 -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **没有索引奖励的子图** - 一些子图不会产生索引奖励,主要是因为它们使用了不受支持的功能,如 IPFS,或者因为它们正在查询主网之外的另一个网络。 如果子图未生成索引奖励,您将在子图上看到一条消息。 -### What are the hardware requirements? +### 对硬件有什么要求? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. 
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **小型** - 足以开始索引几个子图,可能需要扩展。 +- **标准** - 默认设置,这是在 k8s/terraform 部署清单示例中使用的。 +- **中型** - 生产型索引人支持 100 个子图和每秒 200-500 个请求。 +- **大型** -准备对当前使用的所有子图进行索引,并为相关流量的请求提供服务 -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| 类型 | (CPU 数量) | (内存 GB) | (硬盘 TB) | (CPU 数量) | (内存 GB) | +| -- |:--------:|:-------:|:-------:|:--------:|:-------:| +| 小型 | 4 | 8 | 1 | 4 | 16 | +| 标准 | 8 | 30 | 1 | 12 | 48 | +| 中型 | 16 | 64 | 2 | 32 | 64 | +| 大型 | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an indexer should take? +### 索引人应该采取哪些基本的安全防范措施? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. +- **操作员钱包** -设置操作员钱包是一项重要的预防措施,因为它允许索引人将控制权益的密钥和控制日常操作的钥匙分开。 有关说明请参见[协议中的内容](/indexing#stake-in-the-protocol) 介绍。 -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **防火墙** - 只有索引人服务需要公开,尤其要注意锁定管理端口和数据库访问:Graph 节点 JSON-RPC 端点(默认端口:8030)、索引人管理 API 端点(默认端口:18000)和 Postgres 数据库端点(默认端口:5432)不应暴露。 -## Infrastructure +## 基础设施 -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +索引人基础设施的中心是Graph节点,它监控以太坊,根据子图定义提取和加载数据,并将其作为[GraphQL API](/about/introduction#how-the-graph-works)提供。 The Graph节点需要连接到以太坊EVM节点端点,以及用于获取数据的IPFS节点;一个用于存储的PostgreSQL数据库;以及促进其与网络互动的索引人组件。 -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. +- **PostgreSQL 数据库** - Graph 节点的主要存储,这是存储子图数据的地方。 索引人服务和代理也使用数据库来存储状态通道数据、成本模型和索引规则。 -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. +- **Ethereum endpoint** -公开 Ethereum JSON-RPC API 的端点。 这可能采取单个 Ethereum 客户端的形式,也可能是一个更复杂的设置,在多个客户端之间进行负载平衡。 需要注意的是,某些子图将需要特定的 Ethereum 客户端功能,如存档模式和跟踪 API。 -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. 
+- ** IPFS 节点(版本小于 5)** - 子图部署元数据存储在 IPFS 网络上。 The Graph节点在子图部署期间主要访问IPFS节点,以获取子图清单和所有链接文件。 网络索引人不需要托管自己的IPFS节点,网络的IPFS节点是托管在https://ipfs.network.thegraph.com。 -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **索引人服务** -处理与网络的所有必要的外部通信。 共享成本模型和索引状态,将来自网关的查询请求传递给一个 Graph 节点,并通过状态通道与网关管理查询支付。 -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. +- **索引人代理** - 促进索引人在链上的交互,包括在网络上注册,管理子图部署到其 Graph 节点,以及管理分配。 Prometheus 指标服务器- Graph 节点 和 Indexer 组件将其指标记录到指标服务器。 -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +注意:为了支持敏捷扩展,建议在不同的节点集之间分开查询和索引问题:查询节点和索引节点。 -### Ports overview +### 端口概述 -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. +> **重要**: 公开暴露端口时要小心 - **管理端口** 应保持锁定。 这包括下面详述的 Graph 节点 JSON-RPC 和索引人管理端点。 -#### Graph Node +#### Graph 节点 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| 端口 | 用途 | 路径 | CLI参数 | 环境 变量 | +| ---- | ------------------------------------ | ------------------------------------------------------------------- | ----------------- | ----- | +| 8000 | GraphQL HTTP 服务
(用于子图查询) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(用于子图订阅) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(用于管理部署) | / | --admin-port | - | +| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | +| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | -#### Indexer Service +#### 索引人服务 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| 端口 | 用途 | 路径 | CLI参数 | 环境 变量 | +| ---- | -------------------------------------- | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP 服务
(用于付费子图查询) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus 指标 | /metrics | --metrics-port | - | -#### Indexer Agent +#### 索引人代理 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| 端口 | 用途 | 路径 | CLI参数 | 环境
变量 | +| ---- | --------- | -- | ------------------------- | --------------------------------------- | +| 8000 | 索引人管理 API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Google Cloud 上使用 Terraform 建立基础架构 -#### Install prerequisites +#### 安装先决条件 -- Google Cloud SDK -- Kubectl command line tool +- 谷歌云 SDK +- Kubectl 命令行工具 - Terraform -#### Create a Google Cloud Project +#### 创建一个谷歌云项目 -- Clone or navigate to the indexer repository. +- 克隆或导航到索引人存储库。 -- Navigate to the ./terraform directory, this is where all commands should be executed. +- 导航到./terraform 目录,这是所有命令应该执行的地方。 ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- 通过谷歌云认证并创建一个新项目。 ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- 使用 Google Cloud Console 的计费页面为新项目启用计费。 -- Create a Google Cloud configuration. +- 创建谷歌云配置。 ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- 启用所需的 Google Cloud API。 ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- 创建一个服务账户。 ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- 启用将在下一步中创建的数据库和 Kubernetes 集群之间的对等连接。 ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- 创建最小的 terraform 配置文件(根据需要更新)。 ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### 使用 Terraform 创建基础设施 -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +在运行任何命令之前,先阅读 [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) 并在这个目录下创建一个文件`terraform.tfvars`(或者修改我们在上一步创建的文件)。 对于每一个想要覆盖默认值的变量,或者需要设置值的变量,在 `terraform.tfvars`中输入一个设置。 -- Run the following commands to create the infrastructure. +- 运行以下命令来创建基础设施。 ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +用`kubectl apply -k $dir`. 部署所有资源。 ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### 为索引人创建 Kubernetes 组件 -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. 
+- 将目录`k8s/overlays` 复制到新的目录 `$dir,` 中,并调整`bases` 中的`$dir/kustomization.yaml`条目,使其指向目录`k8s/base`。 -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- 读取`$dir`中的所有文件,并按照注释中的指示调整任何值。 -Deploy all resources with `kubectl apply -k $dir`. +用以下方法部署所有资源`kubectl apply -k $dir`. -### Graph Node +### Graph 节点 -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph 节点](https://github.com/graphprotocol/graph-node) 是一个开源的 Rust 实现,它将 Ethereum 区块链事件源化,以确定地更新一个数据存储,可以通过 GraphQL 端点进行查询。 开发者使用子图来定义他们的模式,以及一组用于转换区块链来源数据的映射,Graph 节点处理同步整个链,监控新的区块,并通过 GraphQL 端点提供服务。 -#### Getting started from source +#### 从来源开始 -#### Install prerequisites +#### 安装先决条件 - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Ubuntu 用户的附加要求** - 要在 Ubuntu 上运行 Graph 节点,可能需要一些附加的软件包。 ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### 类型 -1. Start a PostgreSQL database server +1. 启动 PostgreSQL 数据库服务器 ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. 克隆[Graph 节点](https://github.com/graphprotocol/graph-node)repo,并通过运行 `cargo build`来构建源代码。 -3. Now that all the dependencies are setup, start the Graph Node: +3. 现在,所有的依赖关系都已设置完毕,启动 Graph 节点。 ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### 使用 Docker -#### Prerequisites +#### 先决条件 -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum 节点** - 默认情况下,docker 编译设置将使用 mainnet:[http://host.docker.internal:8545](http://host.docker.internal:8545) 连接到主机上的 Ethereum 节点。 你可以通过更新 `docker-compose.yaml`来替换这个网络名和 url。 -#### Setup +#### 安装 -1. Clone Graph Node and navigate to the Docker directory: +1. 克隆 Graph 节点并导航到 Docker 目录。 ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. 仅适用于 linux 用户 - 在`docker-compose.yaml`中使用主机 IP 地址代替 `host.docker.internal`并使用附带的脚本。 ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. 启动一个本地 Graph 节点,它将连接到你的 Ethereum 端点。 ```sh docker-compose up ``` -### Indexer components +### 索引人组件 -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. 
There are three indexer components: +要成功地参与网络,需要几乎持续的监控和互动,所以我们建立了一套 Typescript 应用程序,以方便索引人的网络参与。 有三个索引人组件。 -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- **索引人代理** - 代理监控网络和索引人自身的基础设施,并管理哪些子图部署被索引和分配到链上,以及分配到每个子图的数量。 -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **索引人服务** - 唯一需要对外暴露的组件,该服务将子图查询传递给节点,管理查询支付的状态通道,将重要的决策信息分享给网关等客户端。 -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- **索引人 CLI** - 用于管理索引人代理的命令行界面。 它允许索引人管理成本模型和索引规则。 -#### Getting started +#### 开始 -The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +索引人代理和索引人服务应该与你的 Graph 节点基础架构共同定位。 有很多方法可以为你的索引人组件设置虚拟执行环境,这里我们将解释如何使用 NPM 包或源码在裸机上运行它们,或者通过谷歌云 Kubernetes 引擎上的 kubernetes 和 docker 运行。 如果这些设置实例不能很好地转化为你的基础设施,很可能会有一个社区指南供参考,请到[Discord](https://thegraph.com/discord)上打招呼。 在启动你的索引人组件之前,请记住[在协议中签名](/indexing#stake-in-the-protocol)! -#### From NPM packages +#### 来自 NPM 包 ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### 来自来源 ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Using docker +#### 使用 docker -- Pull images from the registry +- 从注册表中提取图像 ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +**注意**: 启动容器后,索引人服务应该在[http://localhost:7600](http://localhost:7600) 索引人代理应该在[http://localhost:18000/](http://localhost:18000/)。 ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- 运行组件 ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). 
+请参阅 [在 Google Cloud 上使用 Terraform 设置服务器基础架构](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) 一节。 -#### Using K8s and Terraform +#### 使用 K8s 和 Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section +Indexer CLI 是 [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) 的一个插件,可以在终端的`graph indexer`处访问。 -#### Usage +#### 使用方法 -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **注意**: 所有的运行时配置变量可以在启动时作为参数应用到命令中,也可以使用格式为 `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`) 的环境变量。 -#### Indexer agent +#### 索引代理 ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### 索引人服务 ```sh SERVER_HOST=localhost \ @@ -513,44 +513,44 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### 索引人 CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI是一个可以在终端访问`graph indexer`的插件,地址是[`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)。 ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### 使用索引人 CLI 管理索引人 -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. +索引人代理需要来自索引人的输入,才能代表索引人自主地与网络交互。 定义索引人代理行为的机制是**索引规则**. 使用**索引规则**,索引人可以应用其特定的策略来选择子图进行索引和服务查询。 使用**索引规则** ,索引人可以应用他们特定的策略来挑选子图,为其建立索引和提供查询。 规则是通过由代理提供的 GraphQL API 来管理的,被称为索引人管理 API。 与**索引管理 API**交互的建议工具是 **索引人 CLI** ,它是 **Graph CLI**的扩展。 -#### Usage +#### 使用方法 -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**索引人 CLI ** 连接到索引人代理,通常是通过端口转发,因此 CLI 不需要运行在同一服务器或集群上。 为了帮助你入门,并提供一些上下文,这里将简要介绍 CLI。 -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - 连接到索引人管理 API。 通常情况下,与服务器的连接是通过端口转发打开的,所以 CLI 可以很容易地进行远程操作。 (例如: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. 
+- `graph indexer rules get [options] ...]` -获取一个或多个索引规则,使用 `all` 作为`` 来获取所有规则,或使用 global 来获取全局默认规则。 可以使用额外的参数 `--merged` 来指定将特定部署规则与全局规则合并。 这就是它们在索引人代理中的应用方式。 -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` -设置一个或多个索引规则。 -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - 开始索引子图部署(如果可用),并将其`decisionBasis`设置为`always`, 这样索引人代理将始终选择对其进行索引。 如果全局规则被设置为总是,那么网络上所有可用的子图都将被索引。 -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` -停止对某个部署进行索引,并将其 `decisionBasis`设置为 never, 这样它在决定要索引的部署时就会跳过这个部署。 -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` —将部署的 `thedecisionBasis`设置为`规则`, 这样索引人代理将使用索引规则来决定是否对这个部署进行索引。 -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +所有在输出中显示规则的命令都可以使用 `-output`参数在支持的输出格式(`table`, `yaml`, and `json`)之间进行选择 -#### Indexing rules +#### 索引规则 -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +索引规则既可以作为全局默认值应用,也可以用于使用其 ID 的特定子图部署。 `deployment` 和 `decisionBasis`字段是强制性的,而所有其他字段都是可选的。 当索引规则`rules` 作为`decisionBasis`时, 索引人代理将比较该规则上的非空阈值与从相应部署的网络获取的值。 如果子图部署的值高于(或低于)任何阈值,它将被选择用于索引。 -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +例如,如果全局规则的`minStake` 值为**5** (GRT), 则分配给它的权益超过 5 (GRT) 的任何子图部署都将被编入索引。 阈值规则包括`maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, 和 `minAverageQueryFees`. -Data model: +数据模型: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### 成本模式 -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +成本模型根据市场和查询属性为查询提供动态定价。 索引服务处与网关共享每个子网的成本模型,它们打算对每个子网的查询作出回应。 而网关则使用成本模型来做出每个查询的索引人选择决定,并与所选的索引人进行付费谈判。 #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. 
For each top-level query, the first statement which matches it determines the price for that query. +Agora 语言提供了一种灵活的格式来声明查询的成本模型。 Agora 价格模型是一系列的语句,它们按照 GraphQL 查询中每个顶层查询的顺序执行。 对于每个顶层查询,第一个与其匹配的语句决定了该查询的价格。 -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +语句由一个用于匹配 GraphQL 查询的谓词和一个成本表达式组成,该表达式在评估时输出一个以十进制 GRT 表示的成本。 查询的命名参数位置中的值可以在谓词中捕获并在表达式中使用。 也可以在表达式中设置全局,并代替占位符。 -Example cost model: +使用上述模型的查询成本计算示例。 ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +成本模型示例: -| Query | Price | +| 询问 | 价格 | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Applying the cost model +#### 应用成本模式 -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +成本模型是通过索引人 CLI 应用的,CLI 将它们传递给索引人代理的索引人管理 API,以便存储在数据库中。 然后,索引人服务将接收这些模型,并在网关要求时将成本模型提供给它们。 ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## 与网络的交互 -### Stake in the protocol +### 在协议中进行质押 -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +作为索引人参与网络的第一步是批准协议、质押资金,以及(可选)设置一个操作员地址以进行日常协议交互。 _ **注意**: 在这些说明中,Remix 将用于合约交互,但请随意使用您选择的工具([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), 和[MyCrypto](https://www.mycrypto.com/account) 是其他一些已知的工具)._ -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +被索引人创建后,一个健康的配置会经历四种状态。 -#### Approve tokens +#### 批准令牌 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. 在浏览器中打开[Remix app](https://remix.ethereum.org/) 。 -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. 使用[token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).在`File Explorer`文件夹中创建一个名为**GraphToken.abi**的文件。 -3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. +3. 在编辑器中选择`GraphToken.abi` 并打开,切换到部署 `Run Transactions` 选项中。 -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. 
环境选择`Injected Web3`并在`Account` 下面选择你的索引人地址。 -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. 设置 GraphToken 合约地址 - 将 GraphToken 地址(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) 粘贴到`At Address` 旁边 ,单击,`At address` 按钮。 -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. 调用`approve(spender, amount)`函数以批准 Staking 合约。 用质押合约地址(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) 填写`spender` ,`amount` 要质押的代币数量 (in wei). -#### Stake tokens +#### 质押代币 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. 在浏览器中打开[Remix app](https://remix.ethereum.org/)。 -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. 在 `File Explorer` 创建一个名为**Staking.abi** 的文件中,使用 staking ABI. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. +3. 在编辑器中选择`GraphToken.abi` 并打开,切换到部署 `Run Transactions` 选项中。 -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. 在环境选择`Injected Web3` 然后`Account` s 选择您的索引人地址。 -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. 设置 GraphToken 合约地址 - 将 GraphToken 地址(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) 粘贴到`At Address` 旁边 ,单击,`At address` 按钮。 -6. Call `stake()` to stake GRT in the protocol. +6. 调用 `stake()` 质押 GRT。 -7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (可选)索引人可以批准另一个地址作为其索引人基础设施的操作员,以便将控制资金的密钥与执行日常操作,例如在子图上分配和服务(付费)查询的密钥分开。 用`setOperator()` 设置操作员地址。 -8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. +8. 可选)为了控制奖励的分配和战略性地吸引委托人,索引人可以通过更新他们的索引人奖励削减(百万分之一)、查询费用削减(百万分之一)和冷却周期(块数)来更新他们的委托参数。 使用 `setDelegationParameters()`设置。 以下示例设置查询费用削减将 95% 的查询返利分配给索引人,5% 给委托人,设置索引人奖励削减将 60% 的索引奖励分配给索引人,将 40% 分配给委托人,并将`冷却周期`设置为 500 个区块。 ``` setDelegationParameters(950000, 600000, 500) ``` -### The life of an allocation +### 分配的生命周期 After being created by an indexer a healthy allocation goes through four states. -- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. 
A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. +- **活跃** -一旦在链上创建分配([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) 它就被认为是**活跃**。 索引人自身和/或被委托的一部分权益被分配给子图部署,这使得他们可以要求索引奖励并为该子图部署提供查询。 索引人代理根据索引人规则管理创建分配。 -- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). +- **关闭** -索引人可以在 1 个纪元过去后自由关闭一个分配([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) ,或者他们的索引人代理将在**maxAllocationEpochs** (当前为 28 天)之后自动关闭该分配。 当一个分配以有效的索引证明(POI) 关闭时,他们的索引奖励将被分配给索引人及其委托人(参见下面的"奖励是如何分配的?"以了解更多)。 -- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. +- **完成** - 一旦一个分配被关闭,就会有一个争议期,之后该分配被认为是 **最终确定**的,它的查询费返利可以被申领(claim())。 索引人代理监视网络以检测**最终**分配,如果它们高于可配置(和可选)阈值--**—-allocation-claim-threshold**,则声明它们。 -- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. +- **申领** - 分配的最终状态;它已经完成了作为活跃分配的过程,所有符合条件的奖励已经分配完毕,其查询费返利也已申领。 From 12c108db18bda908adb448bc9e5fdfb3a074e707 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:25 -0500 Subject: [PATCH 211/241] New translations indexing.mdx (Vietnamese) --- pages/vi/indexing.mdx | 372 +++++++++++++++++++++--------------------- 1 file changed, 186 insertions(+), 186 deletions(-) diff --git a/pages/vi/indexing.mdx b/pages/vi/indexing.mdx index 090b1be2b226..11ddf485c8e6 100644 --- a/pages/vi/indexing.mdx +++ b/pages/vi/indexing.mdx @@ -4,47 +4,47 @@ title: Indexer import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +Indexer là những người vận hành node (node operator) trong Mạng The Graph có stake Graph Token (GRT) để cung cấp các dịch vụ indexing và xử lý truy vấn. Indexers kiếm được phí truy vấn và phần thưởng indexing cho các dịch vụ của họ. Họ cũng kiếm được tiền từ Rebate Pool (Pool Hoàn phí) được chia sẻ với tất cả những người đóng góp trong mạng tỷ lệ thuận với công việc của họ, tuân theo Chức năng Rebate Cobbs-Douglas. 
-GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +GRT được stake trong giao thức sẽ phải trải qua một khoảng thời gian chờ "tan băng" (thawing period) và có thể bị cắt nếu Indexer có ác ý và cung cấp dữ liệu không chính xác cho các ứng dụng hoặc nếu họ index không chính xác. Indexer cũng có thể được ủy quyền stake từ Delegator, để đóng góp vào mạng. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexer chọn các subgraph để index dựa trên tín hiệu curation của subgraph, trong đó Curator stake GRT để chỉ ra subgraph nào có chất lượng cao và cần được ưu tiên. Bên tiêu dùng (ví dụ: ứng dụng) cũng có thể đặt các tham số (parameter) mà Indexer xử lý các truy vấn cho các subgraph của họ và đặt các tùy chọn cho việc định giá phí truy vấn. -## FAQ +## CÂU HỎI THƯỜNG GẶP -### What is the minimum stake required to be an indexer on the network? +### Lượng stake tối thiểu cần thiết để trở thành một indexer trên mạng là bao nhiêu? -The minimum stake for an indexer is currently set to 100K GRT. +Lượng stake tối thiểu cho một indexer hiện được đặt là 100K GRT. -### What are the revenue streams for an indexer? +### Các nguồn doanh thu cho indexer là gì? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Hoàn phí truy vấn** - Thanh toán cho việc phục vụ các truy vấn trên mạng. Các khoản thanh toán này được dàn xếp thông qua các state channel giữa indexer và cổng. Mỗi yêu cầu truy vấn từ một cổng chứa một khoản thanh toán và phản hồi tương ứng là bằng chứng về tính hợp lệ của kết quả truy vấn. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. +**Phần thưởng Indexing** - Được tạo ra thông qua lạm phát trên toàn giao thức hàng năm 3%, phần thưởng indexing được phân phối cho các indexer đang lập chỉ mục các triển khai subgraph cho mạng lưới. -### How are rewards distributed? +### Phần thưởng được phân phối như thế nào? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Phần thưởng Indexing đến từ lạm phát giao thức được đặt thành 3% phát hành hàng năm. Chúng được phân phối trên các subgraph dựa trên tỷ lệ của tất cả các tín hiệu curation trên mỗi subgraph, sau đó được phân phối theo tỷ lệ cho các indexers dựa trên số stake được phân bổ của họ trên subgraph đó. 
**Việc phân bổ phải được kết thúc với bằng chứng lập chỉ mục (proof of indexing - POI) hợp lệ đáp ứng các tiêu chuẩn do điều lệ trọng tài đặt ra để đủ điều kiện nhận phần thưởng** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). +Nhiều công cụ đã được cộng đồng tạo ra để tính toán phần thưởng; bạn sẽ tìm thấy một bộ sưu tập của chúng được sắp xếp trong [Bộ sưu tập Hướng dẫn cộng đồng](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Bạn cũng có thể tìm thấy danh sách cập nhật mới nhất các công cụ trong các kênh #delegators và #indexers trên [server Discord](https://discord.gg/vtvv7FP). -### What is a proof of indexing (POI)? +### Bằng chứng lập chỉ mục (proof of indexing - POI) là gì? -POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POI được sử dụng trong mạng để xác minh rằng một indexer đang lập chỉ mục các subgraph mà họ đã phân bổ. POI cho khối đầu tiên của epoch hiện tại phải được gửi khi kết thúc phân bổ cho phân bổ đó để đủ điều kiện nhận phần thưởng indexing. POI cho một khối là một thông báo cho tất cả các giao dịch lưu trữ thực thể để triển khai một subgraph cụ thể lên đến và bao gồm khối đó. -### When are indexing rewards distributed? +### Khi nào Phần thưởng indexing được phân phối? -Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +Việc phân bổ liên tục tích lũy phần thưởng khi chúng đang hoạt động. Phần thưởng được thu thập bởi các indexer và phân phối bất cứ khi nào việc phân bổ của họ bị đóng lại. Điều đó xảy ra theo cách thủ công, bất cứ khi nào indexer muốn buộc đóng chúng hoặc sau 28 epoch, delegator có thể đóng phân bổ cho indexer, nhưng điều này dẫn đến không có phần thưởng nào được tạo ra. 28 epoch là thời gian tồn tại của phân bổ tối đa (hiện tại, một epoch kéo dài trong ~ 24 giờ). -### Can pending indexer rewards be monitored? +### Có thể giám sát phần thưởng indexer đang chờ xử lý không? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. +Hợp đồng RewardsManager có có một chức năng [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) chỉ đọc có thể được sử dụng để kiểm tra phần thưởng đang chờ để phân bổ cụ thể. 
-Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Nhiều trang tổng quan (dashboard) do cộng đồng tạo bao gồm các giá trị phần thưởng đang chờ xử lý và bạn có thể dễ dàng kiểm tra chúng theo cách thủ công bằng cách làm theo các bước sau: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Truy vấn [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) để nhận ID cho tất cả phần phân bổ đang hoạt động: ```graphql query indexerAllocations { @@ -60,59 +60,59 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Sử dụng Etherscan để gọi `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Điều hướng đến [giao diện Etherscan đến hợp đồng Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +* Để gọi `getRewards()`: + - Mở rộng **10. getRewards** thả xuống. + - Nhập **allocationID** trong đầu vào. + - Nhấn **Nút** Truy vấn. -### What are disputes and where can I view them? +### Tranh chấp là gì và tôi có thể xem chúng ở đâu? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Các truy vấn và phần phân bổ của Indexer đều có thể bị tranh chấp trên The Graph trong thời gian tranh chấp. Thời hạn tranh chấp khác nhau, tùy thuộc vào loại tranh chấp. Truy vấn / chứng thực có cửa sổ tranh chấp 7 epoch (kỷ nguyên), trong khi phần phân bổ có 56 epoch. Sau khi các giai đoạn này trôi qua, không thể mở các tranh chấp đối với phần phân bổ hoặc truy vấn. Khi một tranh chấp được mở ra, các Fisherman yêu cầu một khoản stake tối thiểu là 10.000 GRT, sẽ bị khóa cho đến khi tranh chấp được hoàn tất và giải pháp đã được đưa ra. Fisherman là bất kỳ người tham gia mạng nào mà đã mở ra tranh chấp. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Tranh chấp có **ba** kết quả có thể xảy ra, phần tiền gửi của Fisherman cũng vậy. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Nếu tranh chấp bị từ chối, GRT do Fisherman gửi sẽ bị đốt, và Indexer tranh chấp sẽ không bị phạt cắt giảm (slashed). 
+- Nếu tranh chấp được giải quyết dưới dạng hòa, tiền gửi của Fisherman sẽ được trả lại, và Indexer bị tranh chấp sẽ không bị phạt cắt giảm (slashed). +- Nếu tranh chấp được chấp nhận, lượng GRT do Fisherman đã gửi sẽ được trả lại, Indexer bị tranh chấp sẽ bị cắt và Fisherman sẽ kiếm được 50% GRT đã bị phạt cắt giảm (slashed). -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +Tranh chấp có thể được xem trong giao diện người dùng trong trang hồ sơ của Indexer trong mục `Tranh chấp`. -### What are query fee rebates and when are they distributed? +### Các khoản hoàn phí truy vấn là gì và chúng được phân phối khi nào? -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. +Phí truy vấn được cổng thu thập bất cứ khi nào một phần phân bổ được đóng và được tích lũy trong pool hoàn phí truy vấn của subgraph. Pool hoàn phí được thiết kế để khuyến khích Indexer phân bổ stake theo tỷ lệ thô với số phí truy vấn mà họ kiếm được cho mạng. Phần phí truy vấn trong pool được phân bổ cho một indexer cụ thể được tính bằng cách sử dụng Hàm Sản xuất Cobbs-Douglas; số tiền được phân phối cho mỗi indexer là một chức năng của phần đóng góp của họ cho pool và việc phân bổ stake của họ trên subgraph. -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. +Khi một phần phân bổ đã được đóng và thời gian tranh chấp đã qua, indexer sẽ có thể nhận các khoản hoàn phí. Khi yêu cầu, các khoản hoàn phí truy vấn được phân phối cho indexer và delegator của họ dựa trên mức cắt giảm phí truy vấn và tỷ lệ pool ủy quyền (delegation). -### What is query fee cut and indexing reward cut? +### Cắt giảm phí truy vấn và cắt giảm phần thưởng indexing là gì? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. +Giá trị `queryFeeCut` và `indexingRewardCut` là các tham số delegation mà Indexer có thể đặt cùng với cooldownBlocks để kiểm soát việc phân phối GRT giữa indexer và delegator của họ. Xem các bước cuối cùng trong [Staking trong Giao thức](/indexing#stake-in-the-protocol) để được hướng dẫn về cách thiết lập các tham số delegation. -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. +- **queryFeeCut** - % hoàn phí truy vấn được tích lũy trên một subgraph sẽ được phân phối cho indexer. 
Nếu thông số này được đặt là 95%, indexer sẽ nhận được 95% của pool hoàn phí truy vấn khi một phần phân bổ được yêu cầu với 5% còn lại sẽ được chuyển cho delegator. -- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. +- **indexingRewardCut** - % phần thưởng indexing được tích lũy trên một subgraph sẽ được phân phối cho indexer. Nếu thông số này được đặt là 95%, indexer sẽ nhận được 95% của pool phần thưởng indexing khi một phần phân bổ được đóng và các delegator sẽ chia 5% còn lại. -### How do indexers know which subgraphs to index? +### Làm thế nào để indexer biết những subgraph nào cần index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexer có thể tự phân biệt bản thân bằng cách áp dụng các kỹ thuật nâng cao để đưa ra quyết định index subgraph nhưng để đưa ra ý tưởng chung, chúng ta sẽ thảo luận một số số liệu chính được sử dụng để đánh giá các subgraph trong mạng: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Tín hiệu curation** - Tỷ lệ tín hiệu curation mạng được áp dụng cho một subgraph cụ thể là một chỉ báo tốt về mức độ quan tâm đến subgraph đó, đặc biệt là trong giai đoạn khởi động khi khối lượng truy vấn đang tăng lên. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Phí truy vấn đã thu** - Dữ liệu lịch sử về khối lượng phí truy vấn được thu thập cho một subgraph cụ thể là một chỉ báo tốt về nhu cầu trong tương lai. -- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Số tiền được stake** - Việc theo dõi hành vi của những indexer khác hoặc xem xét tỷ lệ tổng stake được phân bổ cho subgraph cụ thể có thể cho phép indexer giám sát phía nguồn cung cho các truy vấn subgraph để xác định các subgraph mà mạng đang thể hiện sự tin cậy hoặc các subgraph có thể cho thấy nhu cầu nguồn cung nhiều hơn. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraph không có phần thưởng indexing** - Một số subgraph không tạo ra phần thưởng indexing chủ yếu vì chúng đang sử dụng các tính năng không được hỗ trợ như IPFS hoặc vì chúng đang truy vấn một mạng khác bên ngoài mainnet. Bạn sẽ thấy một thông báo trên một subgraph nếu nó không tạo ra phần thưởng indexing. -### What are the hardware requirements? +### Có các yêu cầu gì về phần cứng (hardware)? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. 
-- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Nhỏ** - Đủ để bắt đầu index một số subgraph, có thể sẽ cần được mở rộng. +- **Tiêu chuẩn** - Thiết lập mặc định, đây là những gì được sử dụng trong bản kê khai (manifest) triển khai mẫu k8s/terraform. +- **Trung bình** - Công cụ indexing production hỗ trợ 100 đồ subgraph và 200-500 yêu cầu mỗi giây. +- **Lớn** - Được chuẩn bị để index tất cả các subgraph hiện đang được sử dụng và phục vụ các yêu cầu cho lưu lượng truy cập liên quan. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| @@ -121,31 +121,31 @@ Indexers may differentiate themselves by applying advanced techniques for making | Medium | 16 | 64 | 2 | 32 | 64 | | Large | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an indexer should take? +### Một số biện pháp phòng ngừa bảo mật cơ bản mà indexer nên thực hiện là gì? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. +- **Ví Operator** - Thiết lập ví của operator là một biện pháp phòng ngừa quan trọng vì nó cho phép indexer duy trì sự tách biệt giữa các khóa kiểm soát stake của họ và những khóa kiểm soát hoạt động hàng ngày. Xem [Stake trong Giao thức](/indexing#stake-in-the-protocol) để được hướng dẫn. -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **Tường lửa** - Chỉ dịch vụ indexer cần được hiển thị công khai và cần đặc biệt chú ý đến việc khóa các cổng quản trị và quyền truy cập cơ sở dữ liệu: điểm cuối The Graph Node JSON-RPC (cổng mặc định: 8030), điểm cuối API quản lý indexer (cổng mặc định: 18000), và điểm cuối cơ sở dữ liệu Postgres (cổng mặc định: 5432) không được để lộ. -## Infrastructure +## Cơ sở hạ tầng -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +Tại trung tâm của cơ sở hạ tầng của indexer là Graph Node theo dõi Ethereum, trích xuất và tải dữ liệu theo định nghĩa subgraph và phục vụ nó như một [GraphQL API](/about/introduction#how-the-graph-works). Graph Node cần được kết nối với điểm cuối node Ethereum EVM và node IPFS để tìm nguồn cung cấp dữ liệu; một cơ sở dữ liệu PostgreSQL cho kho lưu trữ của nó; và các thành phần indexer tạo điều kiện cho các tương tác của nó với mạng. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. +- **Cơ sở dữ liệu PostgreSQLPostgreSQL** - Kho lưu trữ chính cho Graph Node, đây là nơi lưu trữ dữ liệu subgraph. Dịch vụ indexer và đại lý cũng sử dụng cơ sở dữ liệu để lưu trữ dữ liệu kênh trạng thái (state channel), mô hình chi phí và quy tắc indexing. -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. 
+- **Điểm cuối Ethereum** - Một điểm cuối cho thấy API Ethereum JSON-RPC. Điều này có thể ở dạng một ứng dụng khách Ethereum duy nhất hoặc nó có thể là một thiết lập phức tạp hơn để tải số dư trên nhiều máy khách. Điều quan trọng cần lưu ý là các subgraph nhất định sẽ yêu cầu các khả năng cụ thể của ứng dụng khách Ethereum như chế độ lưu trữ và API truy tìm. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (phiên bản nhỏ hơn 5)** - Siêu dữ liệu triển khai subgraph được lưu trữ trên mạng IPFS. Node The Graph chủ yếu truy cập vào node IPFS trong quá trình triển khai subgraph để tìm nạp tệp kê khai (manifest) subgraph và tất cả các tệp được liên kết. Indexers mạng lưới không cần lưu trữ node IPFS của riêng họ, một node IPFS cho mạng lưới được lưu trữ tại https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Dịch vụ Indexer** - Xử lý tất cả các giao tiếp bên ngoài được yêu cầu với mạng. Chia sẻ các mô hình chi phí và trạng thái indexing, chuyển các yêu cầu truy vấn từ các cổng đến Node The Graph và quản lý các khoản thanh toán truy vấn qua các kênh trạng thái với cổng. -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. +- **Đại lý Indexer ** - Tạo điều kiện thuận lợi cho các tương tác của Indexer trên blockchain bao gồm những việc như đăng ký trên mạng lưới, quản lý triển khai subgraph đối với Node The Graph của nó và quản lý phân bổ. Máy chủ số liệu Prometheus - Các thành phần Node The Graph và Indexer ghi các số liệu của chúng vào máy chủ số liệu. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Lưu ý: Để hỗ trợ mở rộng quy mô nhanh, bạn nên tách các mối quan tâm về truy vấn và indexing giữa các nhóm node khác nhau: node truy vấn và node index. -### Ports overview +### Tổng quan về các cổng -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. +> **Quan trọng**: Hãy cẩn thận về việc để lộ các cổng 1 cách công khai - **cổng quản lý** nên được giữ kín. Điều này bao gồm JSON-RPC Node The Graph và các điểm cuối quản lý indexer được trình bày chi tiết bên dưới. 
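As one possible way to enforce this, the sketch below uses `ufw` on a Linux host to expose only the indexer service while keeping the admin interfaces private. It assumes the default ports listed in the tables below (7600 indexer service, 8030 Graph Node JSON-RPC, 18000 indexer management API, 5432 Postgres); adapt it to your own firewall tooling and any non-default ports.

```sh
# Illustrative ufw rules only; adjust for your environment and port layout.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Public: indexer service (paid subgraph queries)
sudo ufw allow 7600/tcp

# Private: Graph Node JSON-RPC, indexer management API, Postgres.
# (Redundant with the default-deny policy, listed here to document intent.)
sudo ufw deny 8030/tcp
sudo ufw deny 18000/tcp
sudo ufw deny 5432/tcp

sudo ufw enable
```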
#### Graph Node @@ -157,38 +157,38 @@ Note: To support agile scaling, it is recommended that query and indexing concer | 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | | 8040 | Prometheus metrics | /metrics | --metrics-port | - | -#### Indexer Service +#### Dịch vụ Indexer | Port | Purpose | Routes | CLI Argument | Environment Variable | | ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | | 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | --metrics-port | - | -#### Indexer Agent +#### Đại lý Indexer | Port | Purpose | Routes | CLI Argument | Environment Variable | | ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | | 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Thiết lập cơ sở hạ tầng máy chủ bằng Terraform trên Google Cloud -#### Install prerequisites +#### Cài đặt điều kiện tiên quyết - Google Cloud SDK -- Kubectl command line tool +- Công cụ dòng lệnh Kubectl - Terraform -#### Create a Google Cloud Project +#### Tạo một dự án Google Cloud -- Clone or navigate to the indexer repository. +- Sao chép hoặc điều hướng đến kho lưu trữ (repository) của indexer. -- Navigate to the ./terraform directory, this is where all commands should be executed. +- Điều hướng đến thư mục ./terraform, đây là nơi tất cả các lệnh sẽ được thực thi. ```sh -cd terraform +cd địa hình ``` -- Authenticate with Google Cloud and create a new project. +- Xác thực với Google Cloud và tạo một dự án mới. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Sử dụng \[billing page\](billing page) của Google Cloud Consolde để cho phép thanh toán cho dự án mới. -- Create a Google Cloud configuration. +- Tạo một cấu hình Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Bật các API Google Cloud được yêu cầu. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Tạo một tài khoản dịch vụ. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Bật tính năng ngang hàng (peering) giữa cơ sở dữ liệu và cụm Kubernetes sẽ được tạo trong bước tiếp theo. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Tạo tệp cấu hình terraform tối thiểu (cập nhật nếu cần). ```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Sử dụng Terraform để tạo cơ sở hạ tầng -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Trước khi chạy bất kỳ lệnh nào, hãy đọc qua [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) và tạo một tệp `terraform.tfvars` trong thư mục này (hoặc sửa đổi thư mục chúng ta đã tạo ở bước vừa rồi). 
Đối với mỗi biến mà bạn muốn ghi đè mặc định hoặc nơi bạn cần đặt giá trị, hãy nhập cài đặt vào `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Chạy các lệnh sau để tạo cơ sở hạ tầng. ```sh -# Install required plugins +# Cài đặt các Plugins được yêu cầu terraform init -# View plan for resources to be created +# Xem kế hoạch cho các tài nguyên sẽ được tạo terraform plan -# Create the resources (expect it to take up to 30 minutes) +# Tạo tài nguyên (dự kiến mất đến 30 phút) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Tải xuống thông tin đăng nhập cho cụm mới vào `~/.kube/config` và đặt nó làm ngữ cảnh mặc định của bạn. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### Tạo các thành phần Kubernetes cho indexer -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- Sao chép thư mục `k8s/overlays` đến một thư mục mới `$dir,` và điều chỉnh `bases` vào trong `$dir/kustomization.yaml` để nó chỉ đến thư mục `k8s/base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- Đọc qua tất cả các tệp trong `$dir` và điều chỉnh bất kỳ giá trị nào như được chỉ ra trong nhận xét. -Deploy all resources with `kubectl apply -k $dir`. +Triển khai tất cả các tài nguyên với `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) là một triển khai Rust mã nguồn mở mà sự kiện tạo nguồn cho blockchain Ethereum để cập nhật một cách xác định kho dữ liệu có thể được truy vấn thông qua điểm cuối GraphQL. Các nhà phát triển sử dụng các subgraph để xác định subgraph của họ và một tập hợp các ánh xạ để chuyển đổi dữ liệu có nguồn gốc từ blockchain và Graph Node xử lý việc đồng bộ hóa toàn bộ chain, giám sát các khối mới và phân phát nó qua một điểm cuối GraphQL. -#### Getting started from source +#### Bắt đầu từ nguồn -#### Install prerequisites +#### Cài đặt điều kiện tiên quyết - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Yêu cầu bổ sung cho người dùng Ubuntu** - Để chạy Graph Node trên Ubuntu, có thể cần một số gói bổ sung. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### Cài đặt -1. Start a PostgreSQL database server +1. Khởi động máy chủ cơ sở dữ liệu PostgreSQL ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. 
Nhân bản [Graph Node](https://github.com/graphprotocol/graph-node) repo và xây dựng nguồn bằng cách chạy `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. Bây giờ tất cả các phụ thuộc đã được thiết lập, hãy khởi động Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Bắt đầu sử dụng Docker -#### Prerequisites +#### Điều kiện tiên quyết -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum node** - Theo mặc định, thiết lập soạn thư docker sẽ sử dụng mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) để kết nối với node Ethereum trên máy chủ của bạn. Bạn có thể thay thế tên và url mạng này bằng cách cập nhật `docker-compose.yaml`. -#### Setup +#### Cài đặt -1. Clone Graph Node and navigate to the Docker directory: +1. Nhân bản Graph Node và điều hướng đến thư mục Docker: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. Chỉ dành cho người dùng linux - Sử dụng địa chỉ IP máy chủ thay vì `host.docker.internal` trong `docker-compose.yaml` bằng cách sử dụng tập lệnh bao gồm: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Bắt đầu một Graph Node cục bộ sẽ kết nối với điểm cuối Ethereum của bạn: ```sh docker-compose up ``` -### Indexer components +### Các thành phần của Indexer -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: +Để tham gia thành công vào mạng này, đòi hỏi sự giám sát và tương tác gần như liên tục, vì vậy chúng tôi đã xây dựng một bộ ứng dụng Typescript để tạo điều kiện cho Indexer tham gia mạng. Có ba thành phần của trình indexer: -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- **Đại ly Indexer** - Đại lý giám sát mạng và cơ sở hạ tầng của chính Indexer và quản lý việc triển khai subgraph nào được lập chỉ mục và phân bổ trên chain và số lượng được phân bổ cho mỗi. -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Dịch vụ Indexer** - Thành phần duy nhất cần được hiển thị bên ngoài, dịch vụ chuyển các truy vấn subgraph đến graph node, quản lý các kênh trạng thái cho các khoản thanh toán truy vấn, chia sẻ thông tin ra quyết định quan trọng cho máy khách như các cổng. -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- **Indexer CLI** - Giao diện dòng lệnh để quản lý đại lý indexer. Nó cho phép indexer quản lý các mô hình chi phí và các quy tắc lập chỉ mục. 
-#### Getting started +#### Bắt đầu -The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +Đại lý indexer và dịch vụ indexer nên được đặt cùng vị trí với cơ sở hạ tầng Graph Node của bạn. Có nhiều cách để thiết lập môi trường thực thi ảo cho bạn các thành phần của indexer; ở đây chúng tôi sẽ giải thích cách chạy chúng trên baremetal bằng cách sử dụng gói hoặc nguồn NPM hoặc thông qua kubernetes và docker trên Google Cloud Kubernetes Engine. Nếu các ví dụ thiết lập này không được dịch tốt sang cơ sở hạ tầng của bạn, có thể sẽ có một hướng dẫn cộng đồng để tham khảo, hãy tìm hiểu thêm tại [Discord](https://thegraph.com/discord)! Hãy nhớ [stake trong giao thứcl](/indexing#stake-in-the-protocol) trước khi bắt đầu các thành phần indexer của bạn! -#### From NPM packages +#### Từ các gói NPM ```sh npm install -g @graphprotocol/indexer-service @@ -398,17 +398,17 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### Từ nguồn ```sh -# From Repo root directory +# Từ Repo root directory yarn -# Indexer Service +# Dịch vụ Indexer cd packages/indexer-service ./bin/graph-indexer-service start ... -# Indexer agent +# Đại lý Indexer cd packages/indexer-agent ./bin/graph-indexer-service start ... @@ -418,48 +418,48 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Using docker +#### Sử dụng docker -- Pull images from the registry +- Kéo hình ảnh từ sổ đăng ký ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +Hoặc xây dựng hình ảnh cục bộ từ nguồn ```sh -# Indexer service +# Dịch vụ Indexer docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Indexer agent +# Đại lý Indexer docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Run the components +- Chạy các thành phần ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). +**LƯU Ý**: Sau khi khởi động vùng chứa, dịch vụ indexer sẽ có thể truy cập được tại [http://localhost:7600](http://localhost:7600) và đại lý indexer sẽ hiển thị API quản lý indexer tại [http://localhost:18000/](http://localhost:18000/). 
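As a quick, illustrative smoke test (assuming the port mappings from the `docker run` commands above), you can curl the two endpoints mentioned in the note; even an error body confirms the container is listening. `/status` is one of the routes listed for the indexer service in the port table earlier in this document.

```sh
# Indexer service (public query port, mapped to 7600)
curl -i http://localhost:7600/status

# Indexer agent management API (mapped to 18000; keep this one private)
curl -i http://localhost:18000/
```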
-#### Using K8s and Terraform +#### Sử dụng K8s and Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section +Xem phần [Thiết lập Cơ sở Hạ tầng Máy chủ bằng Terraform trên Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### Sử dụng -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **LƯU Ý**: Tất cả các biến cấu hình thời gian chạy có thể được áp dụng dưới dạng tham số cho lệnh khi khởi động hoặc sử dụng các biến môi trường của định dạng `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). -#### Indexer agent +#### Đại lý Indexer ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Dịch vụ Indexer ```sh SERVER_HOST=localhost \ @@ -515,42 +515,42 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI là một plugin dành cho [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) có thể truy cập trong terminal tại `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### Quản lý Indexer bằng cách sử dụng indexer CLI -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. +Đại lý indexer cần đầu vào từ một indexer để tự động tương tác với mạng thay mặt cho indexer. Cơ chế để xác định hành vi của đại lý indexer là **các quy tắc indexing**. Sử dụng **các quy tắc indexing** một indexer có thể áp dụng chiến lược cụ thể của họ để chọn các subgraph để lập chỉ mục và phục vụ các truy vấn. Các quy tắc được quản lý thông qua API GraphQL do đại lý phân phối và được gọi là API Quản lý Indexer. Công cụ được đề xuất để tương tác với **API Quản lý Indexer** là **Indexer CLI**, một extension cho **Graph CLI**. -#### Usage +#### Sử dụng -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**Indexer CLI** kết nối với đại lý indexer, thường thông qua chuyển tiếp cổng (port-forwarding), vì vậy CLI không cần phải chạy trên cùng một máy chủ hoặc cụm. Để giúp bạn bắt đầu và cung cấp một số ngữ cảnh, CLI sẽ được mô tả ngắn gọn ở đây. -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Kết nối với API quản lý indexer. 
Thông thường, kết nối với máy chủ được mở thông qua chuyển tiếp cổng, vì vậy CLI có thể dễ dàng vận hành từ xa. (Ví dụ: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. +- `graph indexer rules get [options] ...]` - Lấy một hoặc nhiều quy tắc indexing bằng cách sử dụng `all` như là `` để lấy tất cả các quy tắc, hoặc `global` để lấy các giá trị mặc định chung. Một đối số bổ sung`--merged` có thể được sử dụng để chỉ định rằng các quy tắc triển khai cụ thể được hợp nhất với quy tắc chung. Đây là cách chúng được áp dụng trong đại lý indexer. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - Đặt một hoặc nhiều quy tắc indexing. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Bắt đầu indexing triển khai subgraph nếu có và đặt `decisionBasis` thành `always`, để đại lý indexer sẽ luôn chọn lập chỉ mục nó. Nếu quy tắc chung được đặt thành luôn thì tất cả các subgraph có sẵn trên mạng sẽ được lập chỉ mục. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Ngừng indexing triển khai và đặt `decisionBasis` không bao giờ, vì vậy nó sẽ bỏ qua triển khai này khi quyết định triển khai để lập chỉ mục. -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — Đặt `thedecisionBasis` cho một triển khai thành `rules`, để đại lý indexer sẽ sử dụng các quy tắc indexing để quyết định có index việc triển khai này hay không. -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +Tất cả các lệnh hiển thị quy tắc trong đầu ra có thể chọn giữa các định dạng đầu ra được hỗ trợ (`table`, `yaml`, and `json`) bằng việc sử dụng đối số `-output`. -#### Indexing rules +#### Các quy tắc indexing -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Các quy tắc Indexing có thể được áp dụng làm mặc định chung hoặc cho các triển khai subgraph cụ thể bằng cách sử dụng ID của chúng. Các trường `deployment` và `decisionBasis` là bắt buộc, trong khi tất cả các trường khác là tùy chọn. 
Khi quy tắc lập chỉ mục có `rules` như là `decisionBasis`, thì đại lý indexer sẽ so sánh các giá trị ngưỡng không null trên quy tắc đó với các giá trị được tìm nạp từ mạng để triển khai tương ứng. Nếu triển khai subgraph có các giá trị trên (hoặc thấp hơn) bất kỳ ngưỡng nào thì nó sẽ được chọn để index. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +Ví dụ: nếu quy tắc chung có `minStake` của **5** (GRT), bất kỳ triển khai subgraph nào có hơn 5 (GRT) stake được phân bổ cho nó sẽ được index. Các quy tắc ngưỡng bao gồm `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, và `minAverageQueryFees`. -Data model: +Mô hình dữ liệu: ```graphql type IndexingRule { @@ -573,30 +573,30 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### Các mô hình chi phí -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +Mô hình chi phí cung cấp định giá động cho các truy vấn dựa trên thuộc tính thị trường và truy vấn. Dịch vụ Indexer chia sẻ mô hình chi phí với các cổng cho mỗi subgraph mà chúng dự định phản hồi các truy vấn. Đến lượt mình, các cổng sử dụng mô hình chi phí để đưa ra quyết định lựa chọn indexer cho mỗi truy vấn và để thương lượng thanh toán với những indexer đã chọn. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Ngôn ngữ Agora cung cấp một định dạng linh hoạt để khai báo các mô hình chi phí cho các truy vấn. Mô hình giá Agora là một chuỗi các câu lệnh thực thi theo thứ tự cho mỗi truy vấn cấp cao nhất trong một truy vấn GraphQL. Đối với mỗi truy vấn cấp cao nhất, câu lệnh đầu tiên phù hợp với nó xác định giá cho truy vấn đó. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Một câu lệnh bao gồm một vị từ (predicate), được sử dụng để đối sánh các truy vấn GraphQL và một biểu thức chi phí mà khi được đánh giá sẽ xuất ra chi phí ở dạng GRT thập phân. Các giá trị ở vị trí đối số được đặt tên của một truy vấn có thể được ghi lại trong vị từ và được sử dụng trong biểu thức. Các Globals có thể được đặt và thay thế cho các phần giữ chỗ trong một biểu thức. 
-Example cost model: +Mô hình chi phí mẫu: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Câu lệnh này ghi lại giá trị bỏ qua (skip), +# sử dụng biểu thức boolean trong vị từ để khớp với các truy vấn cụ thể sử dụng `skip` +# và một biểu thức chi phí để tính toán chi phí dựa trên giá trị `skip` và SYSTEM_LOAD global query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Mặc định này sẽ khớp với bất kỳ biểu thức GraphQL nào. +# Nó sử dụng một Global được thay thế vào biểu thức để tính toán chi phí default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Ví dụ truy vấn chi phí bằng cách sử dụng mô hình trên: | Query | Price | | ---------------------------------------------------------------------------- | ------- | @@ -604,67 +604,67 @@ Example query costing using the above model: | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Applying the cost model +#### Áp dụng mô hình chi phí -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Các mô hình chi phí được áp dụng thông qua Indexer CLI, chuyển chúng đến API Quản lý Indexer của đại lý indexer để lưu trữ trong cơ sở dữ liệu. Sau đó, Dịch vụ Indexer sẽ nhận chúng và cung cấp các mô hình chi phí tới các cổng bất cứ khi nào họ yêu cầu. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Tương tác với mạng -### Stake in the protocol +### Stake trong giao thức -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +Các bước đầu tiên để tham gia vào mạng với tư cách là Indexer là phê duyệt giao thức, stake tiền và (tùy chọn) thiết lập địa chỉ operator cho các tương tác giao thức hàng ngày. _ **Lưu ý**: Đối với các mục đích của các hướng dẫn này, Remix sẽ được sử dụng để tương tác hợp đồng, nhưng hãy thoải mái sử dụng công cụ bạn chọn ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), và [MyCrypto](https://www.mycrypto.com/account) là một vài công cụ được biết đến khác)._ -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +Khi một indexer đã stake GRT vào giao thức, [các thành phần indexer](/indexing#indexer-components) có thể được khởi động và bắt đầu tương tác của chúng với mạng. -#### Approve tokens +#### Phê duyệt các token -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Mở [Remix app](https://remix.ethereum.org/) trong một trình duyệt -2. 
In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).
+2. Trong `File Explorer` tạo một tệp tên **GraphToken.abi** với [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).

-3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface.
+3. Với `GraphToken.abi` đã chọn và mở trong trình chỉnh sửa, chuyển sang Deploy (Triển khai) và `Run Transactions` trong giao diện Remix.

-4. Under environment select `Injected Web3` and under `Account` select your indexer address.
+4. Trong môi trường (environment) chọn `Injected Web3` và trong `Account` chọn địa chỉ indexer của bạn.

-5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply.
+5. Đặt địa chỉ hợp đồng GraphToken - Dán địa chỉ hợp đồng GraphToken(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) kế bên `At Address` và nhấp vào nút `At address` để áp dụng.

-6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei).
+6. Gọi chức năng `approve(spender, amount)` để phê duyệt hợp đồng Staking. Điền phần `spender` bằng địa chỉ hợp đồng Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) và điền `amount` bằng số token để stake (tính bằng wei).

-#### Stake tokens
+#### Stake các token

-1. Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Mở [Remix app](https://remix.ethereum.org/) trong một trình duyệt

-2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI.
+2. Trong `File Explorer` tạo một tệp tên **Staking.abi** với Staking ABI.

-3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface.
+3. Với `Staking.abi` đã chọn và mở trong trình chỉnh sửa, chuyển sang `Deploy` và `Run Transactions` trong giao diện Remix.

-4. Under environment select `Injected Web3` and under `Account` select your indexer address.
+4. Trong môi trường (environment) chọn `Injected Web3` và trong `Account` chọn địa chỉ indexer của bạn.

-5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply.
+5. Đặt địa chỉ hợp đồng Staking - Dán địa chỉ hợp đồng Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) kế bên `At Address` và nhấp vào nút `At address` để áp dụng.

-6. Call `stake()` to stake GRT in the protocol.
+6. Gọi lệnh `stake()` để stake GRT vào giao thức.

-7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Tùy chọn) Indexer có thể chấp thuận một địa chỉ khác làm operator cho cơ sở hạ tầng indexer của họ để tách các khóa kiểm soát tiền khỏi những khóa đang thực hiện các hành động hàng ngày như phân bổ trên các subgraph và phục vụ các truy vấn (có trả tiền).
Để đặt operator, hãy gọi lệnh `setOperator()` với địa chỉ operator. -8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Tùy chọn) Để kiểm soát việc phân phối phần thưởng và thu hút delegator một cách chiến lược, indexer có thể cập nhật thông số ủy quyền của họ bằng cách cập nhật indexingRewardCut (phần triệu), queryFeeCut (phần triệu) và cooldownBlocks (số khối). Để làm như vậy, hãy gọi `setDelegationParameters()`. Ví dụ sau đặt queryFeeCut phân phối 95% hoàn phí truy vấn cho indexer và 5% cho delegator, đặt indexingRewardCutto phân phối 60% phần thưởng indexing cho indexer và 40% cho delegator và đặt `thecooldownBlocks` chu kỳ đến 500 khối. ``` setDelegationParameters(950000, 600000, 500) ``` -### The life of an allocation +### Tuổi thọ của một phân bổ -After being created by an indexer a healthy allocation goes through four states. +Sau khi được tạo bởi một indexer, một phân bổ lành mạnh sẽ trải qua bốn trạng thái. -- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. +- **Đang hoạt động** - Sau khi phân bổ được tạo trên chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) nó được xem là **đang hoạt động**. Một phần stake của chính indexer và/hoặc stake được ủy quyền được phân bổ cho việc triển khai subgraph, cho phép họ yêu cầu phần thưởng indexing và phục vụ các truy vấn cho việc triển khai subgraph đó. Đại lý indexer quản lý việc tạo phân bổ dựa trên các quy tắc của indexer. -- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). +- **Đã đóng** - Một indexer có thể tự do đóng phân bổ sau khi 1 epoch ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) hoặc đại lý indexer của họ sẽ tự động đóng phân bổ sau **maxAllocationEpochs** (hiện tại 28 ngày). Khi kết thúc phân bổ với bằng chứng hợp lệ về proof of indexing (POI), phần thưởng indexing của họ sẽ được phân phối cho indexer và những delegator của nó (xem "phần thưởng được phân phối như thế nào?" Bên dưới để tìm hiểu thêm). 
-- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. +- **Hoàn thiện** - Sau khi phân bổ đã bị đóng, sẽ có một khoảng thời gian tranh chấp mà sau đó phân bổ được xem xét là **hoàn thiện** và nó có sẵn các khoản hoàn lại phí truy vấn khả dụng để được yêu cầu (claim()). Đại lý indexer giám sát mạng để phát hiện các phân bổ **hoàn thiện** yêu cầu chúng nếu chúng vượt quá ngưỡng có thể định cấu hình (và tùy chọn), **—-allocation-claim-threshold**. -- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. +- **Đã yêu cầu** - Trạng thái cuối cùng của một phân bổ; nó đã chạy quá trình của nó dưới dạng phân bổ đang hoạt động, tất cả các phần thưởng đủ điều kiện đã được phân phối và các khoản bồi hoàn phí truy vấn của nó đã được yêu cầu. From fe9bb9ffb3661e224920adfe331201353487f4b3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:26 -0500 Subject: [PATCH 212/241] New translations global.json (Spanish) --- pages/es/global.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/es/global.json b/pages/es/global.json index d829483dff23..7e187e00b0c6 100644 --- a/pages/es/global.json +++ b/pages/es/global.json @@ -1,8 +1,8 @@ { "language": "Language", - "aboutTheGraph": "About The Graph", + "aboutTheGraph": "Acerca de The Graph", "developer": "Desarrollador", - "supportedNetworks": "Redes admitidas", + "supportedNetworks": "Redes compatibles", "collapse": "Collapse", "expand": "Expand", "previous": "Previous", From bf2ca09223dcba397462f210bc1287204385dfc1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:27 -0500 Subject: [PATCH 213/241] New translations global.json (Arabic) --- pages/ar/global.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ar/global.json b/pages/ar/global.json index d7e8be465fc7..3cf6737c97c2 100644 --- a/pages/ar/global.json +++ b/pages/ar/global.json @@ -1,7 +1,7 @@ { "language": "Language", - "aboutTheGraph": "About The Graph", - "developer": "المطور", + "aboutTheGraph": "حول The Graph", + "developer": "مطور", "supportedNetworks": "الشبكات المدعومة", "collapse": "Collapse", "expand": "Expand", From dc177e6156bdfc83665ed929d7d7589d1b1e6dc5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:28 -0500 Subject: [PATCH 214/241] New translations global.json (Japanese) --- pages/ja/global.json | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/ja/global.json b/pages/ja/global.json index ebb2edd830b6..d755ec739e6c 100644 --- a/pages/ja/global.json +++ b/pages/ja/global.json @@ -1,8 +1,8 @@ { "language": "Language", - "aboutTheGraph": "About The Graph", - "developer": "ディベロッパー", - "supportedNetworks": "Supported Networks", + "aboutTheGraph": "The Graphについて", + "developer": "デベロッパー", + "supportedNetworks": "サポートされているネットワーク", "collapse": "Collapse", "expand": "Expand", "previous": "Previous", From 9ccc401f81d60cab2be833062eb5105ee3c61b87 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 
20:11:29 -0500 Subject: [PATCH 215/241] New translations global.json (Chinese Simplified) --- pages/zh/global.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/zh/global.json b/pages/zh/global.json index cf259d6a0432..fa27403b545d 100644 --- a/pages/zh/global.json +++ b/pages/zh/global.json @@ -1,7 +1,7 @@ { "language": "Language", - "aboutTheGraph": "About The Graph", - "developer": "开发者", + "aboutTheGraph": "关于 The Graph", + "developer": "开发商", "supportedNetworks": "支持的网络", "collapse": "Collapse", "expand": "Expand", From a3f2ddef0896004b0bd1d8493f33f6f0e4bf5759 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:30 -0500 Subject: [PATCH 216/241] New translations indexing.mdx (Arabic) --- pages/ar/indexing.mdx | 362 +++++++++++++++++++++--------------------- 1 file changed, 181 insertions(+), 181 deletions(-) diff --git a/pages/ar/indexing.mdx b/pages/ar/indexing.mdx index 0b1896db2749..e77c0cb33880 100644 --- a/pages/ar/indexing.mdx +++ b/pages/ar/indexing.mdx @@ -4,47 +4,47 @@ title: فهرسة (indexing) import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +المفهرسون ( Indexers) هم مشغلي العقد (node) في شبكة TheGraph ويقومون ب staking لتوكن (GRT) من أجل توفير خدمات الفهرسة ( indexing) والاستعلام. المفهرسون(Indexers) يحصلون على رسوم الاستعلام ومكافآت الفهرسة وذلك مقابل خدماتهم. وأيضا يكسبون من مجموعة الخصومات (Rebate Pool) والتي تتم مشاركتها مع جميع المساهمين في الشبكة بما يتناسب مع عملهم ، وفقا ل Cobbs-Douglas Rebate Function. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +يخضع GRT المخزن في البروتوكول لفترة إذابة thawing period وقد يتم شطبه إذا كان المفهرسون ضارون ويقدمون بيانات غير صحيحة للتطبيقات أو إذا قاموا بالفهرسة بشكل غير صحيح. المفهرسون يتم تفويضهم من قبل المفوضين وذلك للمساهمه في الشبكة. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +يختار المفهرسون subgraphs للقيام بالفهرسة بناء على إشارة تنسيق subgraphs ، حيث أن المنسقون يقومون ب staking ل GRT وذلك للإشارة ل Subgraphs عالية الجودة. يمكن أيضا للعملاء (مثل التطبيقات) تعيين بارامترات حيث يقوم المفهرسون بمعالجة الاستعلامات ل Subgraphs وتسعير رسوم الاستعلام. -## FAQ +## الأسئلة الشائعة -### What is the minimum stake required to be an indexer on the network? +### ما هو الحد الأدنى لتكون مفهرسا على الشبكة؟ -The minimum stake for an indexer is currently set to 100K GRT. +لتكون مفهرسا فإن الحد الأدنى ل Staking هو 100K GRT. -### What are the revenue streams for an indexer? +### ما هي مصادر الدخل للمفهرس؟ -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. 
Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +** خصومات رسوم الاستعلام Query fee rebates ** - هي مدفوعات مقابل خدمة الاستعلامات على الشبكة. هذه الأجور تكون بواسطة قناة بين المفهرس والبوابة (gateway). كل طلب استعلام من بوابة يحتوي على دفع ،والرد عليه دليل على صحة نتيجة الاستعلام. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. +** مكافآت الفهرسة Indexing rewards** - يتم إنشاؤها من خلال تضخم سنوي للبروتوكول بنسبة 3٪ ، ويتم توزيع مكافآت الفهرسة على المفهرسين الذين يقومون بفهرسة ال subgraphs للشبكة. -### How are rewards distributed? +### كيف توزع المكافآت؟ -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +تأتي مكافآت الفهرسة من تضخم البروتوكول والذي تم تعيينه بنسبة 3٪ سنويا. يتم توزيعها عبر subgraphs بناءً على نسبة جميع إشارات التنسيق في كل منها ، ثم يتم توزيعها بالتناسب على المفهرسين بناءً على حصصهم المخصصة على هذا ال subgraph. \*\* يجب إغلاق المخصصة بإثبات صالح للفهرسة (POI) والذي يفي بالمعايير التي حددها ميثاق التحكيم حتى يكون مؤهلاً للحصول على المكافآت. -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). +تم إنشاء العديد من الأدوات من قبل المجتمع لحساب المكافآت ؛ ستجد مجموعة منها منظمة في دليل المجتمع. يمكنك أيضا أن تجد قائمة محدثة من الأدوات في قناة #delegators و #indexers على Discord. -### What is a proof of indexing (POI)? +### ما هو إثبات الفهرسة (POI)؟ -POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +تُستخدم POIs في الشبكة وذلك للتحقق من أن المفهرس يقوم بفهرسة ال subgraphs والتي قد تم تخصيصها. POI للكتلة الأولى من الفترة الحالية تسلم عند إغلاق المخصصة لذلك التخصيص ليكون مؤهلاً لفهرسة المكافآت. كتلة ال POI هي عبارة عن ملخص لجميع معاملات المخزن لنشر subgraph محدد حتى تضمين تلك الكتلة. -### When are indexing rewards distributed? +### متى يتم توزيع مكافآت الفهرسة؟ -Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +المخصصات تقوم بتجميع المكافآت باستمرار أثناء فاعليتها. يتم جمع المكافآت من قبل المفهرسين وتوزيعها كلما تم إغلاق مخصصاتهم. 
يحدث هذا إما يدويا عندما يريد المفهرس إغلاقها بالقوة ، أو بعد 28 فترة يمكن للمفوض إغلاق التخصيص للمفهرس ، لكن هذا لا ينتج عنه أي مكافآت. 28 فترة هي أقصى مدة للتخصيص (حاليا، تستمر فترة واحدة لمدة 24 ساعة تقريبًا). -### Can pending indexer rewards be monitored? +### هل يمكن مراقبة مكافآت المفهرس المعلقة؟ -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. +تشمل العديد من لوحات المعلومات dashboards التي أنشأها المجتمع على قيم المكافآت المعلقة ويمكن التحقق منها بسهولة يدويا باتباع الخطوات التالية: -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +استخدم Etherscan لاستدعاء `getRewards()`: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. استعلم عن [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) للحصول على IDs لجميع المخصصات النشطة: ```graphql query indexerAllocations { @@ -60,109 +60,109 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +استخدم Etherscan لاستدعاء `()getRewards`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- انتقل إلى [ واجهة Etherscan لعقد المكافآت Rewards contract ](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +* لاستدعاء ()getRewards: + - قم بتوسيع ال\*\* 10. قائمة getRewards المنسدلة. + - انقر على زر **Query استعلام**. + - الاعتراضات لديها **ثلاث** نتائج محتملة ، وكذلك إيداع ال Fishermen. -### What are disputes and where can I view them? +### ما هي الاعتراضات disputes وأين يمكنني عرضها؟ -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +يمكن الاعتراض على استعلامات المفهرس وتخصيصاته على The Graph أثناء فترة الاعتراض dispute. تختلف فترة الاعتراض حسب نوع الاعتراض. تحتوي الاستعلامات / الشهادات Queries/attestations على نافذة اعتراض لـ 7 فترات ، في حين أن المخصصات لها 56 فترة. بعد مرور هذه الفترات ، لا يمكن فتح اعتراضات ضد أي من المخصصات أو الاستعلامات. عند فتح الاعتراض ، يجب على الصيادين Fishermen إيداع على الأقل 10000 GRT ، والتي سيتم حجزها حتى يتم الانتهاء من الاعتراض وتقديم حل. الصيادون Fisherman هم المشاركون في الشبكة الذين يفتحون الاعتراضات. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +يمكنك عرض الاعتراضات من واجهة المستخدم في صفحة ملف تعريف المفهرس وذلك من علامة التبويب `Disputes`. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. 
-- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- إذا تم رفض الاعتراض، فسيتم حرق GRT المودعة من قبل ال Fishermen ، ولن يتم شطب المفهرس المعترض عليه. +- إذا تمت تسوية الاعتراض بالتعادل، فسيتم إرجاع وديعة ال Fishermen ، ولن يتم شطب المفهرس المعترض عليه. +- إذا تم قبول الاعتراض، فسيتم إرجاع GRT التي أودعها الFishermen ، وسيتم شطب المفهرس المعترض عليه وسيكسب Fishermen ال 50٪ من GRT المشطوبة. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +يمكن عرض الاعتراضات في واجهة المستخدم في بروفايل المفهرس ضمن علامة التبويب `Disputes`. -### What are query fee rebates and when are they distributed? +### ما هي خصومات رسوم الاستعلام ومتى يتم توزيعها؟ -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. +يتم تحصيل رسوم الاستعلام بواسطة البوابة gateway وذلك عندما يتم إغلاق الحصة وتجميعها في خصومات رسوم الاستعلام في ال subgraph. تم تصميم مجموعة الخصومات rebate pool لتشجيع المفهرسين على تخصيص حصة تقريبية لمقدار رسوم الاستعلام التي يكسبونها للشبكة. يتم حساب جزء رسوم الاستعلام في المجموعة التي تم تخصيصها لمفهرس معين وذلك باستخدام دالة Cobbs-Douglas Production ؛ المبلغ الموزع لكل مفهرس يعتمد على مساهماتهم في المجموعة pool وتخصيص حصتهم على ال subgraph. -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. +بمجرد إغلاق التخصيص ومرور فترة الاعتراض، تكون الخصومات متاحة للمطالبة بها من قبل المفهرس. عند المطالبة ، يتم توزيع خصومات رسوم الاستعلام للمفهرس ومفوضيه بناء على اقتطاع رسوم الاستعلام query fee cut ونسب أسهم التفويض. -### What is query fee cut and indexing reward cut? +### ما المقصود بqueryFeeCut وindexingRewardCut؟ -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. +قيم ال `queryFeeCut` و `indexingRewardCut` هي بارامترات التفويض التي قد يقوم المفهرس بتعيينها مع cooldownBlocks للتحكم في توزيع GRT بين المفهرس ومفوضيه. انظر لآخر الخطوات في [ ال staking في البروتوكول](/indexing#stake-in-the-protocol) للحصول على إرشادات حول تعيين بارامترات التفويض. -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. +- **queryFeeCut** هي النسبة المئوية لخصومات رسوم الاستعلام المتراكمة على subgraph والتي سيتم توزيعها على المفهرس. 
إذا تم التعيين على 95٪ ، فسيحصل المفهرس على 95٪ من مجموعة خصم رسوم الاستعلام عند المطالبة بالمخصصة و 5٪ إلى المفوضين. -- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. +- **indexingRewardCut** هي النسبة المئوية لمكافآت الفهرسة المتراكمة على subgraph والتي سيتم توزيعها على المفهرس. إذا تم تعيين 95٪ ، فسيحصل المفهرس على 95٪ من مجموع مكافآت الفهرسة عند إغلاق المخصصة وسيقوم المفوضون بتقاسم الـ 5٪ الأخرى. -### How do indexers know which subgraphs to index? +### كيف يعرف المفهرسون أي subgraphs عليهم فهرستها؟ -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +من خلال تطبيق تقنيات متقدمة لاتخاذ قرارات فهرسة ال subgraph ، وسنناقش العديد من المقاييس الرئيسية المستخدمة لتقييم ال subgraphs في الشبكة: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **إشارة التنسيق Curation signal** ـ تعد نسبة إشارة تنسيق الشبكة على subgraph معين مؤشرا جيدا على الاهتمام بهذا ال subgraph، خاصة أثناء المراحل الأولى عندما يزداد حجم الاستعلام. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **مجموعة رسوم الاستعلام Query fees collected** ـ تعد البيانات التاريخية لحجم مجموعة رسوم الاستعلام ل subgraph معين مؤشرا جيدا للطلب المستقبلي. -- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** ـ مراقبة سلوك المفهرسين أو النظر إلى نسب إجمالي الحصة المخصصة ل subgraphs معين تسمح للمفهرس بمراقبة جانب العرض لاستعلامات الsubgraph لتحديد ال subgraphs الموثوقة أو subgraphs التي قد تظهر الحاجة إلى مزيد من العرض. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **ال Subgraphs التي بدون مكافآت فهرسة** ـ بعض الsubgraphs لا تنتج مكافآت الفهرسة بشكل أساسي لأنها تستخدم ميزات غير مدعومة مثل IPFS أو لأنها تستعلم عن شبكة أخرى خارج الشبكة الرئيسية mainnet. سترى رسالة على ال subgraph إذا لا تنتج مكافآت فهرسة. -### What are the hardware requirements? +### ما هي المتطلبات للهاردوير؟ -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **صغيرة**ـ يكفي لبدء فهرسة العديد من ال subgraphs، من المحتمل أن تحتاج إلى توسيع. +- ** قياسية ** - هو الإعداد الافتراضي ، ويتم استخدامه في مثال بيانات نشر k8s / terraform. 
+- **متوسطة** - مفهرس إنتاج يدعم 100 subgraphs و 200-500 طلب في الثانية. +- **كبيرة** - مُعدة لفهرسة جميع ال subgraphs المستخدمة حاليا وأيضا لخدمة طلبات حركة مرور البيانات ذات الصلة. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | (CPUs) | (memory in GB) | (disk in TBs) | (CPUs) | (memory in GB) | +| ----- |:------:|:--------------:|:-------------:|:------:|:--------------:| +| صغير | 4 | 8 | 1 | 4 | 16 | +| قياسي | 8 | 30 | 1 | 12 | 48 | +| متوسط | 16 | 64 | 2 | 32 | 64 | +| كبير | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an indexer should take? +### ما هي بعض احتياطات الأمان الأساسية التي يجب على المفهرس اتخاذها؟ -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. +- **محفظة المشغلOperator wallet**- يعد إعداد محفظة المشغل إجراء احترازيًا مهمًا لأنه يسمح للمفهرس بالحفاظ على الفصل بين مفاتيحه التي تتحكم في ال stake وتلك التي تتحكم في العمليات اليومية. انظر [الحصة Stake في البروتوكول](/indexing#stake-in-the-protocol) للحصول على تعليمات. -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **الجدار الناريFirewall**- فقط خدمة المفهرس تحتاج إلى كشفها للعامة ويجب تأمين منافذ الإدارة والوصول إلى قاعدة البيانات: the Graph Node JSON-RPC endpoint (المنفذ الافتراضي: 8030) ، API endpoint لإدارة المفهرس (المنفذ الافتراضي: 18000) ، ويجب عدم كشف نقطة نهاية قاعدة بيانات Postgres (المنفذ الافتراضي: 5432). -## Infrastructure +## البنية الأساسية -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +في البنية الأساسية للمفهرس ، توجد فيها Graph Node والتي تراقب Ethereum وتستخرج وتحمل البيانات لكل تعريف subgraph وتقدمها باعتبارها [GraphQL API](/about/introduction#how-the-graph-works). يجب توصيل Graph Node ب EVM node endpoints و IPFS node للحصول على البيانات و قاعدة بيانات PostgreSQL ومكونات المفهرس indexer components التي تسهل تفاعلها مع الشبكة. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. +- **قاعدة بيانات PostgreSQL**-هو المخزن الرئيسي لGraph Node ، وفيه يتم تخزين بيانات ال subgraph. خدمة المفهرس والوكيل تستخدم أيضًا قاعدة البيانات لتخزين بيانات قناة الحالة ونماذج التكلفة وقواعد الفهرسة. -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. 
It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. +- ** Ethereum endpoint ** - هي نقطة نهاية تعرض Ethereum JSON-RPC API. قد يأخذ ذلك نموذج عميل Ethereum واحدا أو قد يكون ذو إعداد أكثر تعقيدا والذي يقوم بتحميل أرصدة عبر عدة نماذج. من المهم أن تدرك أن بعض ال subgraphs تتطلب قدرات معينة لعميل Ethereum مثل الأرشفة وتتبع API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **(الإصدار أقل من 5) IPFS node** بيانات ال Subgraph تخزن على شبكة IPFS. يمكن لGraph Node بشكل أساسي الوصول إلى IPFS node أثناء نشر الsubgraph لجلب الsubgraph manifest وجميع الملفات المرتبطة. لا يحتاج مفهرسو الشبكة إلى استضافة IPFS node الخاصة بهم ، حيث يتم استضافة IPFS node للشبكة على https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **خدمة المفهرس Indexer service**- يتعامل مع جميع الاتصالات الخارجية المطلوبة مع الشبكة. ويشارك نماذج التكلفة وحالات الفهرسة ، ويمرر طلبات الاستعلام من البوابات gateways إلى Graph Node ، ويدير مدفوعات الاستعلام عبر قنوات الحالة مع البوابة. -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. +- **Indexer agent**- يسهل تفاعلات المفهرسين على السلسلة بما في ذلك التسجيل في الشبكة ، وإدارة عمليات نشر الsubgraph إلى Graph Node/s الخاصة بها ، وإدارة المخصصات. سيرفر مقاييس Prometheus - مكونات ال Graph Node والمفهرس تسجل قياساتها على سيرفر المقاييس. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +ملاحظة: لدعم القياس السريع ، يستحسن فصل الاستعلام والفهرسة بين مجموعات مختلفة من العقد Nodes: عقد الاستعلام وعقد الفهرس. -### Ports overview +### نظرة عامة على المنافذ Ports -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. +> **مهم** كن حذرًا بشأن كشف المنافذ للعامة - **منافذ الإدارة** يجب أن تبقى مغلقة. يتضمن ذلك Graph Node JSON-RPC ونقاط نهاية endpoints إدارة المفهرس التالية. #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | -#### Indexer Service +#### خدمة المفهرس -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -170,25 +170,25 @@ Note: To support agile scaling, it is recommended that query and indexing concer | ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | | 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### قم بإعداد البنية الأساسية للسيرفر باستخدام Terraform على Google Cloud -#### Install prerequisites +#### متطلبات التثبيت - Google Cloud SDK - Kubectl command line tool - Terraform -#### Create a Google Cloud Project +#### أنشئ مشروع Google Cloud -- Clone or navigate to the indexer repository. +- استنسخ أو انتقل إلى مستودع المفهرس. -- Navigate to the ./terraform directory, this is where all commands should be executed. +- انتقل إلى الدليل ./terraform ، حيث يجب تنفيذ جميع الأوامر. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- قم بالتوثيق بواسطة Google Cloud وأنشئ مشروع جديد. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- استخدم [صفحة الفوترة] في Google Cloud Console لتمكين الفوترة للمشروع الجديد. -- Create a Google Cloud configuration. +- قم بإنشاء Google Cloud configuration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- قم بتفعيل Google Cloud APIs المطلوبة. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- قم بإنشاء حساب الخدمة service account. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- قم بتفعيل ال peering بين قاعدة البيانات ومجموعة Kubernetes التي سيتم إنشاؤها في الخطوة التالية. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- قم بإنشاء الحد الأدنى من ملف التهيئة ل terraform (التحديث حسب الحاجة). ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### استخدم Terraform لإنشاء البنية الأساسية -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +قبل تشغيل أي من الأوامر ، اقرأ [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) وأنشئ ملف `terraform.tfvars` في هذا الدليل (أو قم بتعديل الدليل الذي أنشأناه في الخطوة الأخيرة). أدخل الإعداد في `terraform.tfvars` لكل متغير تريد أن يتجاهل الافتراضي ، أو تريد تعيين قيمة إليه. 
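As a rough illustration (not part of the original guide), the available variable names can be listed straight from `variables.tf`, and any override can then be appended to the `terraform.tfvars` file created above. The value below is a placeholder, not a recommendation:

```sh
# List the variables this module accepts; each "variable" block in variables.tf
# documents its purpose and default value.
grep 'variable "' variables.tf

# Append an override for any variable whose default should change.
# database_password is one of the settings used in the minimal file above;
# the value here is only a placeholder.
cat >> terraform.tfvars <<EOF
database_password = "use-a-strong-password"
EOF
```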
-- Run the following commands to create the infrastructure. +- قم بتشغيل الأوامر التالية لإنشاء البنية الأساسية. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +انشر جميع المصادر باستخدام `kubectl application -k $dir`. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### إنشاء مكونات ال Kubernetes للمفهرس -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- انسخ الدليل `k8s / Overays` إلى دليل جديد `$dir,` واضبط إدخال `القواعد` في `$dir/ kustomization.yaml` بحيث يشير إلى الدليل `k8s / base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- اقرأ جميع الملفات الموجودة في `$dir` واضبط القيم كما هو موضح في التعليقات. Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[ Graph Node ](https://github.com/graphprotocol/graph-node) هو تطبيق مفتوح المصدر Rust ومصدره Ethereum blockchain لتحديث البيانات والذي يمكن الاستعلام عنها عبر GraphQL endpoint. يستخدم المطورون ال subgraphs لتحديد مخططهم ، ويستخدمون مجموعة من الرسوم لتحويل البيانات التي يتم الحصول عليها من blockchain و the Graph Node والتي تقوم بمعالجة مزامنة السلسلة بأكملها ، ومراقبة الكتل الجديدة ، وتقديمها عبر GraphQL endpoint. -#### Getting started from source +#### ابدأ من المصدر -#### Install prerequisites +#### متطلبات التثبيت - **Rust** @@ -307,7 +307,7 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **متطلبات إضافية لمستخدمي Ubuntu **- لتشغيل Graph Node على Ubuntu ، قد تكون هناك حاجة إلى بعض الحزم الإضافية. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config @@ -315,7 +315,7 @@ sudo apt-get install -y clang libpg-dev libssl-dev pkg-config #### Setup -1. Start a PostgreSQL database server +1. شغل سيرفر قاعدة بيانات PostgreSQL ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. استنسخ [ Graph Node ](https://github.com/graphprotocol/graph-node) وابني المصدر عن طريق تشغيل `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. 
ابدأ Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### الشروع في استخدام Docker -#### Prerequisites +#### المتطلبات الأساسية -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum node** - افتراضيا،إعداد ال docker سيستخدم mainnet [http://host.docker.internal:8545](http://host.docker.internal:8545) للاتصال بEthereum node على جهازك المضيف. يمكنك استبدال اسم الشبكة وعنوان url بتحديث `docker-compose.yaml`. #### Setup -1. Clone Graph Node and navigate to the Docker directory: +1. انسخ Graph Node وانتقل إلى دليل Docker: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. لمستخدمي نظام Linux فقط - استخدم عنوان IP للمضيف بدلاً من `host.docker.internal` في `docker-compose.yaml` باستخدام البرنامج النصي المضمن: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. ابدأ Graph Node محلية والتي ستتصل ب Ethereum endpoint الخاصة بك: ```sh docker-compose up ``` -### Indexer components +### مكونات المفهرس Indexer components -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: +المشاركة الناجحة في الشبكة تتطلب مراقبة وتفاعلا مستمرين تقريبا ، لذلك قمنا ببناء مجموعة من تطبيقات Typescript لتسهيل مشاركة شبكة المفهرسين. هناك ثلاثة مكونات للمفهرس: -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- **Indexer agent** - يراقب الشبكة والبنية الأساسية الخاصة بالمفهرس ويدير عمليات نشر subgraph والتي تتم فهرستها وتوزيعها على السلسلة ومقدار ما يتم تخصيصه لكل منها. -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - المكون الوحيد الذي يجب الكشف عنه للعامة، حيث تمر الخدمة على استعلامات subgraph إلى graph node ، وتدير قنوات الحالة state channels لمدفوعات الاستعلام ، وتشارك معلومات مهمة بشأن اتخاذ القرار للعملاء مثل البوابات gateways. -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- ** فهرس CLI ** - واجهة سطر الأوامر لإدارة وكيل المفهرس indexer agent. يسمح للمفهرسين بإدارة نماذج التكلفة وقواعد الفهرسة. -#### Getting started +#### ابدأ -The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. 
If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +يجب أن يتم وضع وكيل المفهرس indexer agent وخدمة المفهرس indexer service في نفس الموقع مع البنية الأساسية ل Graph Node الخاصة بك. هناك العديد من الطرق لإعداد بيئات التشغيل الافتراضية لمكونات المفهرس ؛ سنشرح هنا كيفية تشغيلها على baremetal باستخدام حزم NPM أو المصدر ، أو عبر kubernetes و docker على Google Cloud Kubernetes Engine. إذا لم تُترجم أمثلة الإعداد هذه بشكل جيد إلى بنيتك الأساسية ، فمن المحتمل أن يكون هناك دليل مجتمعي للرجوع إليه ، تفضل بزيارة [ Discord ](https://thegraph.com/discord)! تذكر أن [ تشارك في البروتوكول ](/indexing#stake-in-the-protocol) قبل البدء في تشغيل مكونات المفهرس! -#### From NPM packages +#### من حزم NPM ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### من المصدر ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Using docker +#### استخدام docker -- Pull images from the registry +- اسحب الصور من السجل ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +**ملاحظة**: بعد بدء ال containers ، يجب أن تكون خدمة المفهرس متاحة على [http: // localhost: 7600 ](http://localhost:7600) ويجب على وكيل المفهرس عرض API إدارة المفهرس على [ http: // localhost: 18000 / ](http://localhost:18000/). ```sh # Indexer service @@ -442,22 +442,22 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- قم بتشغيل المكونات ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). +انظر قسم [ إعداد البنية الأساسية للسيرفر باستخدام Terraform على Google Cloud ](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Using K8s and Terraform +#### استخدام K8s و Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section +The Indexer CLI هو مكون إضافي لـ [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) ويمكن الوصول إليه في النهاية الطرفية عند `graph indexer`. -#### Usage +#### الاستخدام -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **ملاحظة**: جميع متغيرات الإعدادات الخاصة بوقت التشغيل يمكن تطبيقها إما كبارامترات للأمر عند بدء التشغيل أو باستخدام متغيرات البيئة بالتنسيق `COMPONENT_NAME_VARIABLE_NAME` (على سبيل المثال `INDEXER_AGENT_ETHEREUM`). 
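For example, the same Ethereum endpoint setting can be supplied either way. This is a sketch rather than part of the original guide: the `--ethereum` flag is assumed to be the CLI counterpart of `INDEXER_AGENT_ETHEREUM`, so confirm the exact name with `graph-indexer-agent start --help`.

```sh
# Option 1: pass the setting as a CLI parameter at startup
# (flag name assumed; verify with --help).
graph-indexer-agent start --ethereum https://ethereum-node.example/rpc

# Option 2: set the equivalent environment variable in the
# COMPONENT_NAME_VARIABLE_NAME format described in the note above.
export INDEXER_AGENT_ETHEREUM=https://ethereum-node.example/rpc
graph-indexer-agent start
```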
#### Indexer agent @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### خدمة المفهرس Indexer service ```sh SERVER_HOST=localhost \ @@ -522,35 +522,35 @@ graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### إدارة المفهرس باستخدام مفهرس CLI -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. +يحتاج وكيل المفهرس indexer agent إلى مدخلات من المفهرس من أجل التفاعل بشكل مستقل مع الشبكة نيابة عن المفهرس. **قواعد الفهرسة** تقوم بتحديد سلوك وكيل المفهرس indexer agent. باستخدام **قواعد الفهرسة** يمكن للمفهرس تطبيق إستراتيجيته المحددة لانتقاء ال subgraphs للفهرسة وعرض الاستعلامات الخاصة بها. تتم إدارة القواعد عبر GraphQL API التي يقدمها الوكيل وتُعرف باسم API إدارة المفهرس. الأداة المقترحة للتفاعل مع ** API إدارة المفهرس ** هي ** Indexer CLI ** ، وهو امتداد لـ **Graph CLI**. -#### Usage +#### الاستخدام -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +يتصل ** Indexer CLI ** بوكيل المفهرس indexer agent ، عادةً من خلال port-forwarding ، لذلك لا يلزم تشغيل CLI على نفس السيرفر أو المجموعة. ولمساعدتك على البدء سيتم وصف CLI بإيجاز هنا. -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - قم بالاتصال بAPI إدارة المفهرس. عادةً ما يتم فتح الاتصال بالسيرفر عبر إعادة توجيه المنفذ port forwarding ، لذلك يمكن تشغيل CLI بسهولة عن بُعد. (مثل: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. +- `graph indexer rules get [options] ...]` - احصل على قاعدة أو أكثر من قواعد الفهرسة باستخدام `all` مثل `` للحصول على جميع القواعد, أو `global` للحصول على الافتراضات العالمية. يمكن استخدام argument إضافية `--merged` لتحديد قواعد النشر المحددة المدمجة مع القاعدة العامة. هذه هي الطريقة التي يتم تطبيقها في indexer agent. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - قم بتعيين قاعدة أو أكثر من قواعد الفهرسة. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - ابدأ فهرسة ال subgraph إذا كان متاحًا وقم بتعيين `decisionBasis` إلى `always`, لذلك دائما سيختار وكيل المفهرس فهرسته. 
إذا تم تعيين القاعدة العامة على دائما always ، فسيتم فهرسة جميع ال subgraphs المتاحة على الشبكة. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - توقف عن فهرسة النشر deployment وقم بتعيين ملف `decisionBasis` إلىnever أبدًا ، لذلك سيتم تخطي هذا النشر عند اتخاذ قرار بشأن عمليات النشر للفهرسة. -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — ضع `thedecisionBasis` للنشر deployment ل `rules`, بحيث يستخدم وكيل المفهرس قواعد الفهرسة ليقرر ما إذا كان سيفهرس هذا النشر أم لا. -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +جميع الأوامر التي تعرض القواعد في الخرج output يمكنها الاختيار بين تنسيقات الإخراج المدعومة (`table`, `yaml`, `json`) باستخدام `-output` argument. -#### Indexing rules +#### قواعد الفهرسة -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +يمكن تطبيق قواعد الفهرسة إما كإعدادات افتراضية عامة أو لعمليات نشر subgraph محددة باستخدام معرفاتها IDs. يعد الحقلان `deployment` و `decisionBasis` إلزاميًا ، بينما تعد جميع الحقول الأخرى اختيارية. عندما تحتوي قاعدة الفهرسة على `rules` باعتبارها `decisionBasis` ، فإن وكيل المفهرس indexer agent سيقارن قيم العتبة غير الفارغة في تلك القاعدة بالقيم التي تم جلبها من الشبكة. إذا كان نشر ال subgraph يحتوي على قيم أعلى (أو أقل) من أي من العتبات ، فسيتم اختياره للفهرسة. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +على سبيل المثال ، إذا كانت القاعدة العامة لديها`minStake` من ** 5 ** (GRT) ، فأي نشر subgraph به أكثر من 5 (GRT) من الحصة المخصصة ستتم فهرستها. قواعد العتبة تتضمن `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, `minAverageQueryFees`. -Data model: +نموذج البيانات Data model: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### نماذج التكلفة Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +نماذج التكلفة تقوم بالتسعير بشكل ديناميكي للاستعلامات بناءً على خصائص السوق والاستعلام. خدمة المفهرس Indexer Service تشارك نموذج التكلفة مع البوابات gateways لكل subgraph للذين يريدون الرد على الاستفسارات. هذه البوابات تستخدم نموذج التكلفة لاتخاذ قرارات اختيار المفهرس لكل استعلام وللتفاوض بشأن الدفع مع المفهرسين المختارين. 
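Tying the `graph indexer rules` commands above together, a minimal session could look like the sketch below. The deployment ID is hypothetical, the thresholds are arbitrary example values taken from the fields in the data model, and the exact argument forms should be confirmed with `graph indexer rules --help`.

```sh
# Connect to the indexer management API (port-forwarded as described above).
graph indexer connect http://localhost:18000/

# Hypothetical deployment ID; replace it with a real subgraph deployment ID.
# decisionBasis "rules" tells the agent to apply the thresholds that follow.
graph indexer rules set QmHypotheticalDeploymentId decisionBasis rules minStake 5000 minSignal 1000

# Show the deployment-specific rule merged with the global defaults.
graph indexer rules get QmHypotheticalDeploymentId --merged
```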
#### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +توفر لغة Agora تنسيقا مرنا للإعلان عن نماذج التكلفة للاستعلامات. نموذج سعر Agora هو سلسلة من العبارات التي يتم تنفيذها بالترتيب لكل استعلام عالي المستوى في GraphQL. بالنسبة إلى كل استعلام عالي المستوى top-level ، فإن العبارة الأولى التي تتطابق معه تحدد سعر هذا الاستعلام. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +تتكون العبارة من المسند predicate ، والذي يستخدم لمطابقة استعلامات GraphQL وتعبير التكلفة والتي عند تقييم النواتج تكون التكلفة ب GRT عشري. قيم الاستعلام الموجودة في ال argument ،قد يتم تسجيلها في المسند predicate واستخدامها في التعبير expression. يمكن أيضًا تعيين Globals وتعويضه في التعبير expression. -Example cost model: +مثال لتكلفة الاستعلام باستخدام النموذج أعلاه: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +مثال على نموذج التكلفة: -| Query | Price | +| الاستعلام | السعر | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Applying the cost model +#### تطبيق نموذج التكلفة -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +يتم تطبيق نماذج التكلفة عبر Indexer CLI ، والذي يقوم بتمريرها إلى وكيل المفهرس عبر API إدارة المفهرس للتخزين في قاعدة البيانات. بعد ذلك ستقوم خدمة المفهرس Indexer Service باستلامها وتقديم نماذج التكلفة للبوابات كلما طلبوا ذلك. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## التفاعل مع الشبكة ### Stake in the protocol -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +الخطوات الأولى للمشاركة في الشبكة كمفهرس هي الموافقة على البروتوكول وصناديق الأسهم، و (اختياريا) إعداد عنوان المشغل لتفاعلات البروتوكول اليومية. 
_ ** ملاحظة **: لأغراض الإرشادات ، سيتم استخدام Remix للتفاعل مع العقد ، ولكن لا تتردد في استخدام الأداة التي تختارها (\[OneClickDapp \](https: // oneclickdapp.com/) و [ ABItopic ](https://abitopic.io/) و [ MyCrypto ](https://www.mycrypto.com/account) وهذه بعض الأدوات المعروفة)._ -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +بعد أن تم إنشاؤه بواسطة المفهرس ، يمر التخصيص السليم عبر أربع حالات. -#### Approve tokens +#### اعتماد التوكن tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. افتح [ تطبيق Remix ](https://remix.ethereum.org/) على المتصفح -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. في `File Explorer` أنشئ ملفا باسم ** GraphToken.abi ** باستخدام \[token ABI \](https://raw.githubusercontent.com/graphprotocol /contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. +3. مع تحديد `GraphToken.abi` وفتحه في المحرر ، قم بالتبديل إلى Deploy و `Run Transactions` في واجهة Remix. -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. تحت البيئة environment ، حدد `Injected Web3` وتحت `Account` حدد عنوان المفهرس. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. قم بتعيين عنوان GraphToken - الصق العنوان (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) بجوار `At Address` وانقر على الزر `At address` لتطبيق ذلك. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. استدعي دالة `approve(spender, amount)` للموافقة على عقد Staking. املأ `spender` بعنوان عقد Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) واملأ `amount` بالتوكن المراد عمل staking لها (في wei). #### Stake tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. افتح [ تطبيق Remix ](https://remix.ethereum.org/) على المتصفح -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. في `File Explorer` أنشئ ملفا باسم ** Staking.abi ** باستخدام Staking ABI. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. +3. مع تحديد `Staking.abi` وفتحه في المحرر ، قم بالتبديل إلى قسم `Deploy` و `Run Transactions` في واجهة Remix. -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. تحت البيئة environment ، حدد `Injected Web3` وتحت `Account` حدد عنوان المفهرس. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. عيّن عنوان عقد Staking - الصق عنوان عقد Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) بجوار `At address` وانقر على الزر `At address` لتطبيق ذلك. -6. Call `stake()` to stake GRT in the protocol. +6. استدعي `stake()` لوضع GRT في البروتوكول. -7. 
(Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (اختياري) يجوز للمفهرسين الموافقة على عنوان آخر ليكون المشغل للبنية الأساسية للمفهرس من أجل فصل المفاتيح keys التي تتحكم بالأموال عن تلك التي تقوم بإجراءات يومية مثل التخصيص على subgraphs وتقديم الاستعلامات (مدفوعة). لتعيين المشغل استدعي `setOperator()` بعنوان المشغل. -8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (اختياري) من أجل التحكم في توزيع المكافآت وجذب المفوضين بشكل استراتيجي ، يمكن للمفهرسين تحديث بارامترات التفويض الخاصة بهم عن طريق تحديث indexingRewardCut (أجزاء لكل مليون) ، و queryFeeCut (أجزاء لكل مليون) ، و cooldownBlocks (عدد الكتل). للقيام بذلك ، استدعي `setDelegationParameters()`. المثال التالي يعيّن queryFeeCut لتوزيع 95٪ من خصومات الاستعلام query rebates للمفهرس و 5٪ للمفوضين ، اضبط indexingRewardCut لتوزيع 60٪ من مكافآت الفهرسة للمفهرس و 40٪ للمفوضين ، وقم بتعيين فترة `thecooldownBlocks` إلى 500 كتلة. ``` setDelegationParameters(950000, 600000, 500) ``` -### The life of an allocation +### عمر التخصيص allocation After being created by an indexer a healthy allocation goes through four states. -- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. +- ** نشط ** - بمجرد إنشاء تخصيص على السلسلة (\[allocateFrom()\](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/ Staking.sol # L873)) فهذا يعتبر ** نشطا **. يتم تخصيص جزء من حصة المفهرس الخاصة و / أو الحصة المفوضة لنشر subgraph ، مما يسمح لهم بالمطالبة بمكافآت الفهرسة وتقديم الاستعلامات لنشر ال subgraph. يدير وكيل المفهرس indexer agent إنشاء عمليات التخصيص بناء على قواعد المفهرس. -- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). 
+- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). عندما يتم إغلاق تخصيص بإثبات صالح للفهرسة (POI) ، يتم توزيع مكافآت الفهرسة الخاصة به على المفهرس والمفوضين (انظر "كيف يتم توزيع المكافآت؟" أدناه لمعرفة المزيد). -- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. +- ** مكتمل** - بمجرد إغلاق التخصيص ، توجد فترة اعتراض يتم بعدها اعتبار التخصيص ** مكتملا** ويكون خصومات رسوم الاستعلام متاحة للمطالبة بها (claim()). وكيل المفهرس indexer agent يراقب الشبكة لاكتشاف التخصيصات ** المكتملة ** ويطالب بها إذا كانت أعلى من العتبة (واختياري) ، ** عتبة-مطالبة-التخصيص **. -- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. +- ** مُطالب به ** - هي الحالة النهائية للتخصيص ؛ وهي التي سلكت مجراها كمخصصة نشطة ، وتم توزيع جميع المكافآت المؤهلة وتمت المطالبة بخصومات رسوم الاستعلام. From 707c99dea77a298c6675577ae13ea8b80cb04f0d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:32 -0500 Subject: [PATCH 217/241] New translations index.json (Spanish) --- pages/es/index.json | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/es/index.json b/pages/es/index.json index 0c98cc47940c..0cd8d2cd65d7 100644 --- a/pages/es/index.json +++ b/pages/es/index.json @@ -3,15 +3,15 @@ "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "About The Graph", + "title": "Acerca de The Graph", "description": "Aprende más sobre The Graph" }, "quickStart": { - "title": "Quick Start", + "title": "Comienzo Rapido", "description": "Jump in and start with The Graph" }, "developerFaqs": { - "title": "Developer FAQs", + "title": "Preguntas Frecuentes de los Desarrolladores", "description": "Frequently asked questions" }, "queryFromAnApplication": { @@ -19,7 +19,7 @@ "description": "Learn to query from an application" }, "createASubgraph": { - "title": "Create a Subgraph", + "title": "Crear un Subgrafo", "description": "Use Studio to create subgraphs" }, "migrateFromHostedService": { From 4c16b28c34b14d499bd9c2d2a5e1be11eab9d070 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:33 -0500 Subject: [PATCH 218/241] New translations index.json (Arabic) --- pages/ar/index.json | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/pages/ar/index.json b/pages/ar/index.json index 6f92f6870a4e..3e48ecb8a612 100644 --- a/pages/ar/index.json +++ b/pages/ar/index.json @@ -3,15 +3,15 @@ "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "About The Graph", + "title": "حول The Graph", "description": "تعرف أكثر حول The Graph" }, "quickStart": { - "title": "Quick Start", + "title": "بداية سريعة", "description": "Jump 
in and start with The Graph" }, "developerFaqs": { - "title": "Developer FAQs", + "title": "الأسئلة الشائعة للمطورين", "description": "Frequently asked questions" }, "queryFromAnApplication": { @@ -19,7 +19,7 @@ "description": "Learn to query from an application" }, "createASubgraph": { - "title": "Create a Subgraph", + "title": "إنشاء الـ Subgraph", "description": "Use Studio to create subgraphs" }, "migrateFromHostedService": { From 736d6273cebfada765a47c47a29b5a070eb6a2dc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:34 -0500 Subject: [PATCH 219/241] New translations index.json (Japanese) --- pages/ja/index.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/ja/index.json b/pages/ja/index.json index 39c600880dbf..86cb3354c441 100644 --- a/pages/ja/index.json +++ b/pages/ja/index.json @@ -3,11 +3,11 @@ "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "About The Graph", + "title": "The Graphについて", "description": "The Graphについて学ぶ" }, "quickStart": { - "title": "Quick Start", + "title": "クイックスタート", "description": "Jump in and start with The Graph" }, "developerFaqs": { From 08ced872053bb6eee6b21129af6488c9039c219b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:35 -0500 Subject: [PATCH 220/241] New translations index.json (Korean) --- pages/ko/index.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/ko/index.json b/pages/ko/index.json index ccd5906c050e..02032b3326a0 100644 --- a/pages/ko/index.json +++ b/pages/ko/index.json @@ -3,7 +3,7 @@ "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "About The Graph", + "title": "The Graph 소개", "description": "The Graph에 대해 더 알아보기" }, "quickStart": { From 7a53be51ec2565bbe537927eaf799f64d8eda050 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:36 -0500 Subject: [PATCH 221/241] New translations index.json (Chinese Simplified) --- pages/zh/index.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/zh/index.json b/pages/zh/index.json index 915cf97d06a8..05802f75ba78 100644 --- a/pages/zh/index.json +++ b/pages/zh/index.json @@ -3,7 +3,7 @@ "intro": "Learn about The Graph, a decentralized protocol for indexing and querying data from blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "About The Graph", + "title": "关于 The Graph", "description": "了解有关The Graph的更多信息" }, "quickStart": { From d0ba571b04eefd7be9c237a7bbe40d01437745d1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:37 -0500 Subject: [PATCH 222/241] New translations indexing.mdx (Japanese) --- pages/ja/indexing.mdx | 373 +++++++++++++++++++++--------------------- 1 file changed, 186 insertions(+), 187 deletions(-) diff --git a/pages/ja/indexing.mdx b/pages/ja/indexing.mdx index ac9eab223e4f..e02be5538cbc 100644 --- a/pages/ja/indexing.mdx +++ b/pages/ja/indexing.mdx @@ -4,51 +4,51 @@ title: インデクシング import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. 
They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +インデクサは、グラフネットワークのノードオペレータであり、グラフトークン(GRT)を賭けて、インデックス作成や問い合わせ処理のサービスを提供します。 インデクサーは、そのサービスの対価として、クエリフィーやインデックス作成の報酬を得ることができます。 また、Cobbs-Douglas Rebate Function に基づいて、ネットワーク貢献者全員にその成果に応じて分配される Rebate Pool からも報酬を得ることもできます。 -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +プロトコルにステークされた GRT は解凍期間が設けられており、インデクサーが悪意を持ってアプリケーションに不正なデータを提供したり、不正なインデックスを作成した場合には、スラッシュされる可能性があります。 また、インデクサーはデリゲーターからステークによる委任を受けて、ネットワークに貢献することができます。 -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +インデクサ − は、サブグラフのキュレーション・シグナルに基づいてインデックスを作成するサブグラフを選択し、キュレーターは、どのサブグラフが高品質で優先されるべきかを示すために GRT をステークします。 消費者(アプリケーションなど)は、インデクサーが自分のサブグラフに対するクエリを処理するパラメータを設定したり、クエリフィーの設定を行うこともできます。 -## FAQ +## よくある質問 -### What is the minimum stake required to be an indexer on the network? +### ネットワーク上のインデクサーになるために必要な最低ステーク量はいくらですか? -The minimum stake for an indexer is currently set to 100K GRT. +インデクサーの最低ステーク量は、現在 100K GRT に設定されています。 -### What are the revenue streams for an indexer? +### インデクサーの収入源は何ですか? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**クエリフィー・リベート** - ネットワーク上でクエリを提供するための手数料です。 この手数料は、インデクサーとゲートウェイ間のステートチャネルを介して支払われます。 ゲートウェイからの各クエリリクエストには手数料が含まれ、対応するレスポンスにはクエリ結果の有効性の証明が含まれます。 -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. +**インデキシング報酬** - プロトコル全体のインフレーションにより生成される年率 3%のインデキシング報酬は、ネットワークのサブグラフ・デプロイメントのインデキシングを行うインデクサーに分配されます。 -### How are rewards distributed? +### 報酬の分配方法は? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +インデキシング報酬は、年間 3%の発行量に設定されているプロトコル・インフレから得られます。 報酬は、それぞれのサブグラフにおけるすべてのキュレーション・シグナルの割合に基づいてサブグラフに分配され、そのサブグラフに割り当てられたステークに基づいてインデクサーに分配されます。 **特典を受けるためには、仲裁憲章で定められた基準を満たす有効なPOI(Proof of Indexing)で割り当てを終了する必要があります。** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP). 
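As a rough illustration of the split described under "How are rewards distributed?" above, the sketch below estimates an indexer's yearly indexing rewards from a subgraph's share of total curation signal and the indexer's share of the stake allocated to that subgraph. It is a simplification for intuition only: it ignores epoch timing, the POI requirement, and the exact on-chain accrual, and the input values and names are illustrative rather than taken from the protocol contracts.

```typescript
// Illustrative only, not the on-chain reward formula.
// Assumes rewards come from ~3% annual issuance, split first by a subgraph's
// share of all curation signal, then by an indexer's share of stake allocated
// to that subgraph. All figures below are example inputs, not protocol data.
interface RewardInputs {
  totalTokenSupply: number // total GRT supply (GRT)
  annualIssuanceRate: number // e.g. 0.03 for 3%
  subgraphSignal: number // curation signal on this subgraph (GRT)
  totalSignal: number // curation signal across all subgraphs (GRT)
  indexerAllocation: number // this indexer's stake allocated to the subgraph (GRT)
  totalAllocation: number // all stake allocated to the subgraph (GRT)
}

function estimateAnnualIndexingRewards(i: RewardInputs): number {
  const issuance = i.totalTokenSupply * i.annualIssuanceRate
  const subgraphShare = i.subgraphSignal / i.totalSignal
  const indexerShare = i.indexerAllocation / i.totalAllocation
  return issuance * subgraphShare * indexerShare
}

// Example: a subgraph holding 1% of all signal, an indexer holding 25% of its allocations
console.log(
  estimateAnnualIndexingRewards({
    totalTokenSupply: 10_000_000_000,
    annualIssuanceRate: 0.03,
    subgraphSignal: 100_000,
    totalSignal: 10_000_000,
    indexerAllocation: 250_000,
    totalAllocation: 1_000_000,
  })
) // ≈ 750,000 GRT per year, before the indexer/delegator split set by indexingRewardCut
```

The amount an indexer ultimately keeps is further divided with its delegators according to the indexingRewardCut parameter covered later on this page.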
+コミュニティでは、報酬を計算するための数多くのツールが作成されており、それらは[コミュニティガイドコレクション](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)にまとめられています。 また、[Discord サーバー](https://discord.gg/vtvv7FP)の#delegators チャンネルや#indexers チャンネルでも、最新のツールリストを見ることができます。 -### What is a proof of indexing (POI)? +### POI(proof of indexing)とは何ですか? -POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POI は、インデクサーが割り当てられたサブグラフにインデックスを作成していることを確認するためにネットワークで使用されます。 現在のエポックの最初のブロックに対する POI は、割り当てを終了する際に提出しなければ、その割り当てはインデックス報酬の対象となりません。 あるブロックの POI は、そのブロックまでの特定のサブグラフのデプロイに対するすべてのエンティティストアのトランザクションのダイジェストです。 -### When are indexing rewards distributed? +### インデキシングリワードはいつ配布されますか? -Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +割り当ては、それがアクティブである間、継続的に報酬を発生させます。 報酬はインデクサによって集められ、割り当てが終了するたびに分配されます。 これは、インデクサーが強制的に閉じようとしたときに手動で行うか、28 エポックの後にデリゲーターがインデクサーのために割り当てを終了することができますが、この場合は報酬がミントされません。 28 エポックは最大の割り当て期間です(現在、1 エポックは約 24 時間です) -### Can pending indexer rewards be monitored? +### 保留中のインデクサーの報酬は監視できますか? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation. +コミュニティが作成したダッシュボードの多くは保留中の報酬の値を含んでおり、以下の手順で簡単に手動で確認することができます。 -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Etherscan を使った`getRewards()`の呼び出し: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. [メインネット・サブグラフ](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet)にクエリして、全てのアクティブなアロケーションの ID を取得します。 ```graphql query indexerAllocations { - indexer(id: "") { + indexer(id: "") { allocations { activeForIndexer { allocations { @@ -62,57 +62,57 @@ query indexerAllocations { Use Etherscan to call `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Etherscan interface to Rewards contract に移動します。 -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +* `getRewards()`を呼び出します + - **10を拡大します。 getRewards**のドロップダウン + - 入力欄に**allocationID**を入力 + - **Query**ボタンをクリック -### What are disputes and where can I view them? +### 争議(disputes)とは何で、どこで見ることができますか? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. 
After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +インデクサークエリとアロケーションは、期間中に The Graph 上で争議することができます。 争議期間は、争議の種類によって異なります。 クエリ/裁定には7エポックスの紛争窓口があり、割り当てには56エポックスがあります。 これらの期間が経過した後は、割り当てやクエリのいずれに対しても紛争を起こすことはできません。 紛争が開始されると、Fishermenは最低10,000GRTのデポジットを要求され、このデポジットは紛争が最終的に解決されるまでロックされます。 フィッシャーマンとは、紛争を開始するネットワーク参加者のことです。 -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +争議は UI のインデクサーのプロフィールページの`Disputes`タブで確認できます。 -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- 争議が却下された場合、フィッシャーマンが預かった GRT はバーンされ、争議中のインデクサーはスラッシュされません。 +- 争議が引き分けた場合、フィッシャーマンのデポジットは返還され、争議中のインデクサーはスラッシュされることはありません。 +- 争議が受け入れられた場合、フィッシャーマンがデポジットした GRT は返却され、争議中のインデクサーはスラッシュされ、フィッシャーマンはスラッシュされた GRT の 50%を獲得します。 -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +紛争は、UIのインデクサーのプロファイルページの`紛争`タブで確認できます。 -### What are query fee rebates and when are they distributed? +### クエリフィーリベートとは何ですか、またいつ配布されますか? -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. +クエリフィーは、割り当てが終了するたびにゲートウェイが徴収し、サブグラフのクエリフィーリベートプールに蓄積されます。 リベートプールは、インデクサーがネットワークのために獲得したクエリフィーの量にほぼ比例してステークを割り当てるように促すためのものです。 プール内のクエリフィーのうち、特定のインデクサーに割り当てられる部分はコブス・ダグラス生産関数を用いて計算されます。 インデクサーごとの分配額は、プールへの貢献度とサブグラフでのステークの割り当ての関数となります。 -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. +割り当てが終了し、争議期間が経過すると、リベートをインデクサーが請求できるようになります。 請求されたクエリフィーのリベートは、クエリフィーカットとデリゲーションプールの比率に基づいて、インデクサーとそのデリゲーターに分配されます。 -### What is query fee cut and indexing reward cut? +### クエリフィーカットとインデキシングリワードカットとは? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. +`クエリフィーカット` と`インデキシングリワードカット` の値は、インデクサーが クールダウンブロックと共に設定できるデリゲーションパラメータで、インデクサーとそのデリゲーター間の GRT の分配を制御するためのものです。 デリゲーションパラメータの設定方法については、[Staking in the Protocol](/indexing#stake-in-the-protocol)の最後のステップを参照してください。 -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. 
If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. +- **クエリフィーカット** - サブグラフに蓄積されたクエリフィーリベートのうち、インデクサーに分配される割合です。 これが 95%に設定されていると、割り当てが要求されたときに、インデクサはクエリフィー・リベート・プールの 95%を受け取り、残りの 5%はデリゲータに渡されます。 -- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. +- **インデキシング・リワードカット** - サブグラフに蓄積されたインデキシング・リワードのうち、インデクサーに分配される割合です。 これが 95%に設定されていると、割り当てが終了したときに、インデクサがインデキシング・リワードプールの 95%を受け取り、残りの 5%をデリゲータが分け合うことになります。 -### How do indexers know which subgraphs to index? +### インデクサーはどのサブグラフにインデックスを付けるかをどう見分けるのですか? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +インデクサーは、サブグラフのインデキシングの決定に高度な技術を適用することで差別化を図ることができますが、一般的な考え方として、ネットワーク内のサブグラフを評価するために使用されるいくつかの主要な指標について説明します。 -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **キュレーションシグナル** - 特定のサブグラフに適用されたネットワークキュレーションシグナルの割合は、そのサブグラフへの関心を示す指標となり、特にクエリのボリュームが増加しているブートストラップ段階では有効となります。 -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **コレクティド・クエリフィー** - 特定のサブグラフに対してコレクティド・クエリフィー量の履歴データは、将来的な需要に対する指標となります。 -- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **ステーク量** - 他のインデクサーの行動を監視したり、特定のサブグラフに割り当てられた総ステーク量の割合を見ることで、インデクサーはサブグラフ・クエリの供給側を監視し、ネットワークが信頼を示しているサブグラフや、より多くの供給を必要としているサブグラフを特定することができます。 -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **インデックス報酬のないサブグラフ** - 一部のサブグラフは、主に IPFS などのサポートされていない機能を使用していたり、メインネット外の別のネットワークをクエリしていたりするため、インデックス報酬を生成しません。 インデクシング・リワードを生成していないサブグラフにはメッセージが表示されます。 -### What are the hardware requirements? +### 必要なハードウェアは何ですか? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Small** - いくつかのサブグラフのインデックス作成を開始するのに十分ですが、おそらく拡張が必要になります +- **Standard** - デフォルトのセットアップであり、k8s/terraform の展開マニフェストの例で使用されているものです +- **Medium** - 100 個のサブグラフと 1 秒あたり 200 ~ 500 のリクエストをサポートするプロダクションインデクサー +- **Large** - 現在使用されているすべてのサブグラフのインデックスを作成し、関連するトラフィックのリクエストに対応します | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| @@ -121,48 +121,48 @@ Indexers may differentiate themselves by applying advanced techniques for making | Medium | 16 | 64 | 2 | 32 | 64 | | Large | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an indexer should take? +### インデクサーが取るべきセキュリティ対策は? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. +- **Operator wallet** - オペレーター・ウォレットを設定することは、インデクサーがステークを管理するキーと日々のオペレーションを管理するキーを分離することができるため、重要な予防策となります。 設定方法については [Stake in Protocol](/indexing#stake-in-the-protocol)をご覧ください。 -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **Important**: ポートの公開には注意が必要です。 **管理用ポート**はロックしておくべきです。 これには、以下に示すグラフノードの JSON-RPC とインデクサ管理用のエンドポイントが含まれます。 -## Infrastructure +## インフラストラクチャ -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +インデクサーのインフラの中心となるのは、イーサリアムを監視し、サブグラフの定義に従ってデータを抽出・ロードし、[GraphQL API](/about/introduction#how-the-graph-works)として提供するグラフノードです。 グラフノードには、イーサリアムの EVM ノードのエンドポイントと、データを取得するための IPFS ノード、ストア用の PostgreSQL データベース、ネットワークとのやりとりを促進するインデクサーのコンポーネントが接続されている必要があります。 -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. +- **PostgreSQLPostgreSQL データベース** - グラフノードのメインストアで、サブグラフのデータが格納されています。 また、インデクササービスとエージェントは、データベースを使用して、ステートチャネルデータ、コストモデル、およびインデクシングルールを保存します。 -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. +- **イーサリアムエンドポイント** - Ethereum JSON-RPC API を公開するエンドポイントです。 これは単一のイーサリアムクライアントの形をとっているかもしれませんし、複数のイーサリアムクライアント間でロードバランスをとるような複雑なセットアップになっているかもしれません。 特定のサブグラフには、アーカイブモードやトレース API など、特定のイーサリアムクライアント機能が必要になることを認識しておくことが重要です。 -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. 
+- **IPFS ノード(バージョン 5 未満)** - サブグラフのデプロイメタデータは IPFS ネットワーク上に保存されます。 グラフノードは、サブグラフのデプロイ時に主に IPFS ノードにアクセスし、サブグラフマニフェストと全てのリンクファイルを取得します。 ネットワーク・インデクサーは独自の IPFS ノードをホストする必要はありません。 ネットワーク用の IPFS ノードは、https://ipfs.network.thegraph.com でホストされています。 -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Indexer service** - ネットワークとの必要な外部通信を全て処理します。 コストモデルとインデキシングのステータスを共有し、ゲートウェイからのクエリ要求をグラフノードに渡し、ゲートウェイとのステートチャンネルを介してクエリの支払いを管理します。 -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. +- **Indexer agent** - ネットワークへの登録、グラフノードへのサブグラフのデプロイ管理、割り当ての管理など、チェーン上のインデクサーのインタラクションを容易にします。 Prometheus メトリクス・サーバー - グラフノードとインデクサー・コンポーネントは、それぞれのメトリクスをメトリクス・サーバーに記録します。 -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +コマンドを実行する前に、[variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf)に目を通し、このディレクトリに`terraform.tfvars` というファイルを作成します(または、前のステップで作成したものを修正します) デフォルトを上書きしたい変数や、値を設定したい変数ごとに、`terraform.tfvars`に設定を入力します。 -### Ports overview +### ポートの概要 -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. +> **ファイアウォール** - インデクサーのサービスのみを公開し、管理ポートとデータベースへのアクセスをロックすることに特に注意を払う必要があります。 グラフノードの JSON-RPC エンドポイント(デフォルトポート:8030)、インデクサー管理 API エンドポイント(デフォルトポート:18000)、Postgres データベースエンドポイント(デフォルトポート:5432)を公開してはいけません。 -#### Graph Node +#### グラフノード -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -170,25 +170,25 @@ Note: To support agile scaling, it is recommended that query and indexing concer | ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | | 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Google Cloud で Terraform を使ってサーバーインフラを構築 -#### Install prerequisites +#### インストールの前提条件 - Google Cloud SDK -- Kubectl command line tool +- Kubectl コマンドラインツール - Terraform -#### Create a Google Cloud Project +#### Google Cloud プロジェクトの作成 -- Clone or navigate to the indexer repository. +- クローンまたはインデクサーリポジトリに移動 -- Navigate to the ./terraform directory, this is where all commands should be executed. +- ./terraform ディレクトリに移動し、ここですべてのコマンドを実行 ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Google Cloud で認証し、新しいプロジェクトを作成 ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Google Cloud Console の\[billing page\](課金ページ) を使用して、新しいプロジェクトの課金を有効にします。 -- Create a Google Cloud configuration. +- Google Cloud の設定を作成します。 ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Google Cloud API の設定 ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- サービスアカウントを作成 ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- データベースと次のステップで作成する Kubernetes クラスター間のピアリングを有効化 ```sh gcloud compute addresses create google-managed-services-default \ @@ -243,13 +243,12 @@ gcloud compute addresses create google-managed-services-default \ --purpose=VPC_PEERING \ --network default \ --global \ - --description 'IP Range for peer networks.' -gcloud services vpc-peerings connect \ + --description 'IP Range for peer networks.' gcloud services vpc-peerings connect \ --network=default \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Terraform 設定ファイルを作成(必要に応じて更新してください) ```sh indexer= @@ -260,24 +259,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Terraform を使ってインフラを構築 -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +コマンドを実行する前に、[variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf)に目を通し、このディレクトリに`terraform.tfvars`というファイルを作成します(または、前のステップで作成したものを修正します)。 デフォルトを上書きしたい、あるいは値を設定したい各変数について、`terraform.tfvars`に設定を入力します。 -- Run the following commands to create the infrastructure. 
+- 以下のコマンドを実行して、インフラを作成します。 ```sh -# Install required plugins +# 必要なプラグインのインストール terraform init -# View plan for resources to be created +# 作成されるリソースのプランを見る terraform plan -# Create the resources (expect it to take up to 30 minutes) +# リソースの作成(最大で30分程度かかる見込みです) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +`kubectl apply -k $dir`ですべてのリソースをデプロイします。 ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +284,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### インデクサー用の Kubernetes コンポーネントの作成 -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- `k8s/overlays`ディレクトリを新しいディレクトリ`$dir,`にコピーし、`$dir/kustomization.yaml`内の`bases`エントリが`k8s/base`ディレクトリを指すように調整します。 -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- `$dir` にあるすべてのファイルを読み、コメントに示されている値を調整します。 Deploy all resources with `kubectl apply -k $dir`. -### Graph Node +### グラフノード -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[グラフノード](https://github.com/graphprotocol/graph-node)はオープンソースの Rust 実装で、Ethereum ブロックチェーンをイベントソースにして、GraphQL エンドポイントでクエリ可能なデータストアを決定論的に更新します。 開発者は、サブグラフを使ってスキーマを定義し、ブロックチェーンから供給されるデータを変換するためのマッピングセットを使用します。 グラフノードは、チェーン全体の同期、新しいブロックの監視、GraphQL エンドポイント経由での提供を処理します。 -#### Getting started from source +#### ソースからのスタート -#### Install prerequisites +#### インストールの前提条件 - **Rust** @@ -307,7 +306,7 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Ubuntu ユーザーのための追加要件** - グラフノードを Ubuntu 上で動作させるためには、いくつかの追加パッケージが必要になります。 ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config @@ -315,7 +314,7 @@ sudo apt-get install -y clang libpg-dev libssl-dev pkg-config #### Setup -1. Start a PostgreSQL database server +1. PostgreSQL データベースサーバを起動します。 ```sh initdb -D .postgres @@ -323,9 +322,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. [グラフノード Graph Node](https://github.com/graphprotocol/graph-node)のリポジトリをクローンし、cargo build を実行してソースをビルドします。 -3. Now that all the dependencies are setup, start the Graph Node: +3. 全ての依存関係の設定が完了したら、グラフノードを起動します: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +333,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Docker の使用 -#### Prerequisites +#### 前提条件 -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. 
+- **イーサリアムノード** - デフォルトでは、docker compose setup は mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545)を使ってホストマシン上のイーサリアムノードに接続します。 このネットワーク名と URL は、`docker-compose.yaml`を更新することで置き換えることができます。 #### Setup -1. Clone Graph Node and navigate to the Docker directory: +1. Graph Node をクローンし、Docker ディレクトリに移動します。 ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. Linux ユーザーのみ - 付属のスクリプトを使って、`docker-compose.yaml`の中で`host.docker.internal`の代わりにホストの IP アドレスを使用します: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Ethereum のエンドポイントに接続し、ローカルの Graph Node を起動します: ```sh docker-compose up ``` -### Indexer components +### インデクサーコンポーネント -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: +ネットワークへの参加を成功させるためには、ほぼ常に監視と対話を行う必要があるため、Indexers のネットワークへの参加を促進するための一連の Typescript アプリケーションを構築しました。 インデクサーには 3 つのコンポーネントがあります: -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- **Indexer agent** - ネットワークとインデクサー自身のインフラを監視し、どのサブグラフ・デプロイメントがインデキシングされ、チェーンに割り当てられるか、またそれぞれにどれだけの量が割り当てられるかを管理します。 -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - 外部に公開する必要のある唯一のコンポーネントで、サブグラフのクエリをグラフノードに渡し、クエリの支払いのための状態チャンネルを管理し、重要な意思決定情報をゲートウェイなどのクライアントに共有します。 -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- **インデクサー CLI** - インデクサーエージェントを管理するためのコマンドラインインターフェースです。 インデクサーがコストモデルやインデクシングルールを管理するためのもの。 -#### Getting started +#### はじめに -The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components! +インデクサーエージェントとインデクサーサービスは、グラフノードインフラストラクチャーと同居している必要があります。 ここでは、NPM パッケージやソースを使ってベアメタル上で実行する方法と、Google Cloud Kubernetes Engine 上で kubernetes や docker を使って実行する方法を説明します。 これらの設定例があなたのインフラに適用できない場合は、コミュニティガイドを参照するか、[Discord](https://thegraph.com/discord)でお問い合わせください。 インデクサーコンポーネントを起動する前に、[プロトコルのステーク](/indexing#stake-in-the-protocol) を忘れないでください。 -#### From NPM packages +#### NPM パッケージから ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +397,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### ソース ```sh # From Repo root directory @@ -418,16 +417,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... 
``` -#### Using docker +#### Docker の使用 -- Pull images from the registry +- レジストリからイメージを引き出す ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +**注**: コンテナの起動後、インデクサーサービスは[http://localhost:7600](http://localhost:7600)でアクセスでき、インデクサーエージェントは[http://localhost:18000/](http://localhost:18000/)で インデクサー管理 API を公開しているはずです。 ```sh # Indexer service @@ -442,24 +441,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- コンポーネントの実行 ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/). +[Google Cloud で Terraform を使ってサーバーインフラを構築するのセクション ](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) を参照してください。 -#### Using K8s and Terraform +#### K8s と Terraform の使用 -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section +Indexer CLI は、[`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)のプラグインで、ターミナルから`graph indexer`でアクセスできます。 -#### Usage +#### 使用方法 -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **注**:全てのランタイム設定変数は、起動時にコマンドのパラメーターとして適用するか、`COMPONENT_NAME_VARIABLE_NAME`(例:`INDEXER_AGENT_ETHEREUM`)という形式の環境変数を使用することができます。 -#### Indexer agent +#### インデクサーエージェント ```sh graph-indexer-agent start \ @@ -487,7 +486,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### インデクサーサービス ```sh SERVER_HOST=localhost \ @@ -513,44 +512,44 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### インデクサー CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +インデクサーがプロトコルに GRT をステークすると、[indexer components](/indexing#indexer-components)を起動し、ネットワークとのやりとりを始めることができます。 ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### Indexer CLI によるインデクサー管理 -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. 
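Before the individual commands, it can help to picture what a rules-based decision boils down to. The sketch below mirrors the behavior described under "Indexing rules" further down: the agent compares the non-null thresholds of a rule against values observed on the network and selects the deployment if any threshold is met. It is an illustrative simplification, not the indexer agent's actual implementation, and the network values used are invented.

```typescript
// Simplified sketch of a rules-based indexing decision. Illustrative only;
// field names follow the IndexingRule data model documented below, and the
// example stats are made up rather than fetched from the network.
type DecisionBasis = 'always' | 'never' | 'rules'

interface IndexingRule {
  deployment: string
  decisionBasis: DecisionBasis
  minStake?: number // GRT
  minSignal?: number // GRT
  minAverageQueryFees?: number // GRT
}

interface DeploymentStats {
  stake: number
  signal: number
  averageQueryFees: number
}

function shouldIndex(rule: IndexingRule, stats: DeploymentStats): boolean {
  if (rule.decisionBasis === 'always') return true
  if (rule.decisionBasis === 'never') return false
  // decisionBasis === 'rules': any satisfied non-null threshold selects the deployment
  if (rule.minStake !== undefined && stats.stake >= rule.minStake) return true
  if (rule.minSignal !== undefined && stats.signal >= rule.minSignal) return true
  if (rule.minAverageQueryFees !== undefined && stats.averageQueryFees >= rule.minAverageQueryFees) return true
  return false
}

// With a global rule of { decisionBasis: 'rules', minStake: 5 }, a deployment with
// more than 5 GRT of allocated stake is picked up, matching the example given below.
console.log(
  shouldIndex(
    { deployment: 'global', decisionBasis: 'rules', minStake: 5 },
    { stake: 12, signal: 0, averageQueryFees: 0 }
  )
) // true
```

In practice these rules are created and inspected with the `graph indexer rules` commands listed below.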
+インデクサエージェントは、インデクサーに代わって自律的にネットワークと対話するために、インデクサーからの入力を必要とします。 インデクサー・エージェントの動作を定義するためのメカニズムが**インデキシングルール**です。 インデクサーは、**インデキシングルール**を使用して、インデックスを作成してクエリを提供するサブグラフを選択するための特定の戦略を適用することができます。 ルールは、エージェントが提供する GraphQL API を介して管理され、Indexer Management API と呼ばれています。 **Indexer Management API**を操作するための推奨ツールは、 **Graph CLI**の拡張である**Indexer CLI**です。 -#### Usage +#### 使用方法 -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**Indexer CLI**は、通常ポート・フォワーディングを介してインデクサー・エージェントに接続するため、CLI が同じサーバやクラスタ上で動作する必要はありません。 ここでは CLI について簡単に説明します。 -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - インデクサー管理 API に接続します。 通常、サーバーへの接続はポートフォワーディングによって開かれ、CLI をリモートで簡単に操作できるようになります。 (例:`kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. +- `graph indexer rules get [options] ...]` - 1 つまたは複数のインデキシングルールを取得します。 ``に `all` を指定すると全てのルールを取得し、`global` を指定するとグローバルなデフォルトを取得します。 追加の引数`--merged` を使用すると、ディプロイメント固有のルールをグローバル ルールにマージするように指定できます。 これがインデクサー・エージェントでの適用方法です。 -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - 1 つまたは複数のインデキシング規則を設定します。 -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - 利用可能な場合はサブグラフ配置のインデックス作成を開始し、`decisionBasis`を`always`に設定するので、インデクサー・エージェントは常にインデキシングを選択します。 グローバル ルールが always に設定されている場合、ネットワーク上のすべての利用可能なサブグラフがインデックス化されます。 -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - 配置のインデックス作成を停止し、`decisionBasis`を never に設定することで、インデックスを作成する配置を決定する際にこの配置をスキップします。 -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` - 配置の`thedecisionBasis` を`rules`に設定し、インデクサーエージェントがインデキシングルールを使用して、この配置にインデックスを作成するかどうかを決定するようにします。 -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +出力にルールを表示するすべてのコマンドは、`-output`引数を使用して、サポートされている出力形式(`table`, `yaml`, and `json`) の中から選択できます。 -#### Indexing rules +#### インデキシングルール -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +インデキシングルールは、グローバルなデフォルトとして、または ID を使用して特定のサブグラフデプロイメントに適用できます。 `deployment`と`decisionBasis`フィールドは必須で、その他のフィールドはすべてオプションです。 インデキシングルールが`decisionBasis`として`rules` を持つ場合、インデクサー・エージェントは、そのルール上の非 NULL の閾値と、対応する配置のためにネットワークから取得した値を比較します。 サブグラフデプロイメントがいずれかのしきい値以上(または以下)の値を持つ場合、それはインデキシングのために選択されます。 -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +例えば、グローバル ルールの`minStake`が**5**(GRT) の場合、5(GRT) 以上のステークが割り当てられているサブグラフデプロイメントは、インデックスが作成されます。 しきい値ルールには、 `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`があります。 -Data model: +データモデル ```graphql type IndexingRule { @@ -573,17 +572,17 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### コストモデル -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +コストモデルは、マーケットやクエリ属性に基づいて、クエリの動的な価格設定を行います。 インデクサーサービスは、クエリに応答する予定の各サブグラフのコストモデルをゲートウェイと共有します。 一方、ゲートウェイはコストモデルを使用して、クエリごとにインデクサーの選択を決定し、選択されたインデクサーと支払いの交渉を行います。 #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Agora 言語は、クエリのコストモデルを宣言するための柔軟なフォーマットを提供します。 Agora のコストモデルは、GraphQL クエリのトップレベルのクエリごとに順番に実行される一連のステートメントです。 各トップレベルのクエリに対して、それにマッチする最初のステートメントがそのクエリの価格を決定します。 -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +ステートメントは、GraphQL クエリのマッチングに使用される述語と、評価されると decimal GRT でコストを出力するコスト式で構成されます。 クエリの名前付き引数の位置にある値は、述語の中に取り込まれ、式の中で使用されます。 また、グローバルを設定し、式のプレースホルダーとして代用することもできます。 -Example cost model: +上記モデルを用いたクエリのコスト計算例: ``` # This statement captures the skip value, @@ -596,75 +595,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +コストモデルの例: | Query | Price | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Applying the cost model +#### コストモデルの適用 -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. 
The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +コストモデルは Indexer CLI を通じて適用され、それをインデクサー・エージェントの Indexer Management API に渡してデータベースに格納します。 その後、インデクサーサービスがこれを受け取り、ゲートウェイから要求があるたびにコスト・モデルを提供します。 ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## ネットワークとのインタラクション -### Stake in the protocol +### プロトコルへのステーク -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +インデクサーとしてネットワークに参加するための最初のステップは、プロトコルを承認し、資金を拠出し、(オプションで)日常的なプロトコルのやり取りのためにオペレーターアドレスを設定することです。 \_ **注**: 本説明書ではコントラクトのやり取りに Remix を使用しますが、お好みのツールを自由にお使いください([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account)などが知られています) -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +健全なアロケーションは、インデクサーによって作成された後、4 つの状態を経ます。 -#### Approve tokens +#### トークンの承認 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. ブラウザで[Remix app](https://remix.ethereum.org/)を開きます。 -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. `File Explorer`で**GraphToken.abi**というファイルを作成し、 [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json)を指定します。 -3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. +3. `GraphToken.abi`を選択してエディタで開いた状態で、Remix のインターフェースの Deploy and `Run Transactions` セクションに切り替えます。 -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. 環境から[`Injected Web3`] を選択し、`Account`でインデクサーアドレスを選択します。 -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. GraphToken のコントラクトアドレスの設定 - `At Address`の横に GraphToken のコントラクトアドレス(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) を貼り付け、`At Address`ボタンをクリックして適用します。 -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. `approve(spender, amount)`関数を呼び出し、ステーキング契約を承認します。 `spender`にはステーキングコントラクトアドレス(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`)を、`amount`にはステークするトークン(単位:wei)を記入します。 -#### Stake tokens +#### トークンをステークする -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. ブラウザで[Remix app](https://remix.ethereum.org/) を開きます。 -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. `File Explorer`で**Staking.abi**という名前のファイルを作成し、Staking ABI を指定します。 -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface. +3. 
エディタで`Staking.abi`を選択して開いた状態で、Remix インターフェースの`Deploy` and `Run Transactions`セクションに切り替えます。 -4. Under environment select `Injected Web3` and under `Account` select your indexer address. +4. 環境から[`Injected Web3`] を選択し、`Account`でインデクサーアドレスを選択します。 -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Staking contract address の設定 - `At Address`の横に Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) を貼り付け、 `At Address`ボタンをクリックして適用します。 -6. Call `stake()` to stake GRT in the protocol. +6. `stake()`を呼び出して、GRT をプロトコルにステークします。 -7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (オプション)インデクサーは、資金を管理する鍵と、サブグラフへの割り当てや(有料の)クエリの提供などの日常的な動作を行う鍵とを分離するために、別のアドレスをインデクサインフラストラクチャのオペレーターとして承認することができます。 オペレーターを設定するには、オペレーターのアドレスを指定して`setOperator()`をコールします。 -8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (オプション) 報酬の分配を制御し、デリゲータを戦略的に引き付けるために、 インデクサーは indexingRewardCut (parts per million)、 queryFeeCut (parts per million)、 cooldownBlocks (number of blocks) を更新することで、 デリゲーションパラメータを更新することができます。 これを行うには`setDelegationParameters()`をコールします。 次の例では、クエリフィーカットをクエリリベートの 95%をインデクサーに、5%をデリゲーターに分配するように設定し、インデクサーリワードカットをインデキシング報酬の 60%をインデクサーに、40%をデリゲーターに分配するよう設定し、`thecooldownBlocks` 期間を 500 ブロックに設定しています。 ``` setDelegationParameters(950000, 600000, 500) ``` -### The life of an allocation +### アロケーションの寿命 -After being created by an indexer a healthy allocation goes through four states. +インデクサーによって作成された後、健全なアロケーションは4つの状態を経ます。 -- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules. +- **Active**- オンチェーンでアロケーションが作成されると(allocateFrom())、それは**active**であるとみなされます。 インデクサー自身やデリゲートされたステークの一部がサブグラフの配置に割り当てられ、これによりインデクシング報酬を請求したり、そのサブグラフの配置のためにクエリを提供したりすることができます。 インデクサエージェントは、インデキシングルールに基づいて割り当ての作成を管理します。 -- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). 
When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more). +- **Closed** - インデクサーは、1 エポックが経過した時点で自由に割り当てをクローズすることができます([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) また、インデクサエージェントは、**maxAllocationEpochs**(現在は 28 日)が経過した時点で自動的に割り当てをクローズします。 割り当てが有効な POI(Proof of Indexing)とともにクローズされると、そのインデクサー報酬がインデクサーとそのデリゲーターに分配されます(詳細は下記の「報酬の分配方法」を参照してください) -- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()). The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**. +- **Finalized** - 割り当てがクローズすると、争議期間が設けられます。 その後、割り当てが**finalized**したとみなされ、クエリフィーのリベートを請求することができます(claim()) インデクサーエージェントは、ネットワークを監視して**finalized** した割り当てを検出し、設定可能な(オプションの)しきい値 **—-allocation-claim-threshold**を超えていれば、それを請求できます。 -- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed. +- **請求** - アロケーションの最終状態で、アクティブなアロケーションとしての期間が終了し、全ての適格な報酬が配布され、クエリ料の払い戻しが請求されます。 From a5752dc2691b1e4c154186274cec33e7ef38c128 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:38 -0500 Subject: [PATCH 223/241] New translations indexing.mdx (Spanish) --- pages/es/indexing.mdx | 390 +++++++++++++++++++++--------------------- 1 file changed, 195 insertions(+), 195 deletions(-) diff --git a/pages/es/indexing.mdx b/pages/es/indexing.mdx index 398c746cbd93..2485f360b904 100644 --- a/pages/es/indexing.mdx +++ b/pages/es/indexing.mdx @@ -4,47 +4,47 @@ title: indexación import { Difficulty } from '@/components' -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function. +Los Indexadores son operadores de nodos en The Graph Network que stakean Graph Tokens (GRT) para proporcionar servicios de indexación y procesamiento de consultas. Los Indexadores obtienen tarifas de consulta y recompensas de indexación por sus servicios. También obtienen ganacias de un pool de reembolso que se comparte con todos los contribuyentes de la red en proporción a su trabajo, siguiendo la idea de Function Rebate por parte de Cobbs-Douglas. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network. +Los GRT que se bloquean (en stake) dentro del protocolo están sujetos a un período de descongelación y pueden ser reducidos si los Indexadores son maliciosos y entregan datos incorrectos a las aplicaciones o si indexan información incorrecta. A los Indexadores también se les puede asignar participaciones por parte de los Delegadores, quienes buscan contribuir a la red. 
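Both an Indexer's own stake and the stake delegated to it are recorded on-chain and can be inspected through the network subgraph. The sketch below is purely illustrative: the query endpoint and the `stakedTokens` / `delegatedTokens` field names are assumptions that are not confirmed by this guide and should be checked against the actual network subgraph schema.

```sh
# Hypothetical check of an Indexer's own and delegated stake (endpoint and field names assumed).
curl -s -X POST \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ indexer(id: \"0xyourindexeraddress\") { stakedTokens delegatedTokens } }"}' \
  https://api.thegraph.com/subgraphs/name/graphprotocol/graph-network-mainnet
```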
-Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Los Indexadores seleccionan subgrafos para indexar basados en la señal de curación del subgrafo, donde los curadores acuñan sus GRT para indicar qué subgrafos son de mejor calidad y deben tener prioridad para ser indexados. Los consumidores (por ejemplo, aplicaciones, clientes) también pueden establecer parámetros para los cuales los Indexadores procesan consultas para sus subgrafos y establecen preferencias para el precio asignado a cada consulta. -## FAQ +## Preguntas frecuentes -### What is the minimum stake required to be an indexer on the network? +### ¿Cuál es la participación mínima requerida (stake) para ser Indexador en la red? -The minimum stake for an indexer is currently set to 100K GRT. +El stake mínimo para un indexador es actualmente de 100.000 GRT. -### What are the revenue streams for an indexer? +### ¿Cuáles son las fuentes de ingresos de un indexador? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +** Descuentos en las tarifas de consulta**: Pagos por atender consultas en la red. Estos pagos están asignados a través de unos canales entre el Indexador y un gateway. Cada solicitud de consulta de una puerta de enlace contiene un pago y la respuesta correspondiente una prueba de la validez del resultado de la consulta. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to indexers who are indexing subgraph deployments for the network. +**Recompensas de indexación**: Generadas a través de una inflación anual del protocolo equivalente al 3% , las recompensas de indexación se distribuyen a los indexadores que indexan las implementaciones de subgrafos para la red. -### How are rewards distributed? +### ¿Cómo se distribuyen las recompensas? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Las recompensas de indexación provienen de la inflación del protocolo, que se establece en una emisión anual del 3%. Se distribuyen en subgrafos según la proporción de toda la señal de curación en cada uno, luego se distribuyen proporcionalmente a los indexadores en función de su stake asignado en ese subgrafo. ** Una asignación debe cerrarse con una prueba de indexación (POI) válida que cumpla con los estándares establecidos por la carta de arbitraje para ser elegible dentro de las recompensas.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
You can also find an up to date list of tools in the #delegators and #indexers channels on the [Discord server](https://discord.gg/vtvv7FP).
+La comunidad ha creado numerosas herramientas para calcular las recompensas; encontrarás una colección de ellas organizadas en la [colección de herramientas creadas por la Comunidad](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). También puedes encontrar una lista actualizada de herramientas en los canales de #delegators e #indexers en el [servidor de Discord](https://discord.gg/vtvv7FP).

-### What is a proof of indexing (POI)?
+### ¿Qué es una prueba de indexación (POI)?

-POIs are used in the network to verify that an indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+Los POI se utilizan en la red para verificar que un indexador está indexando los subgrafos en los que ha asignado. Se debe enviar un POI para el primer bloque del ciclo actual al cerrar una asignación para que esa asignación sea elegible para las recompensas de indexación. Un POI para un bloque es un resumen de todas las transacciones del almacén de entidades para una implementación de subgrafo específica, hasta ese bloque inclusive.

-### When are indexing rewards distributed?
+### ¿Cuándo se distribuyen las recompensas de indexación?

-Allocations are continuously accruing rewards while they're active. Rewards are collected by the indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the indexer wants to force close them, or after 28 epochs a delegator can close the allocation for the indexer, but this results in no rewards being minted. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h).
+Las asignaciones acumulan recompensas continuamente mientras están activas. Los indexadores recogen las recompensas y las distribuyen cada vez que se cierran sus asignaciones. Eso sucede ya sea manualmente, siempre que el indexador quiera forzar el cierre, o después de 28 ciclos un delegador puede cerrar la asignación para el indexador, pero esto da como resultado que no se generen recompensas. 28 ciclos es la duración máxima de la asignación (en este momento, un ciclo dura aproximadamente 24 h).

-### Can pending indexer rewards be monitored?
+### ¿Se pueden monitorear las recompensas pendientes del indexador?

-The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) function that can be used to check the pending rewards for a specific allocation.
+El contrato RewardsManager tiene una función de solo lectura [getRewards](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) que se puede utilizar para verificar las recompensas pendientes de una asignación específica.

-Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:
+Muchos de los paneles creados por la comunidad incluyen valores de recompensas pendientes y se pueden verificar fácilmente de forma manual siguiendo estos pasos:

-1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations:
+1. 
Consulta el [ subgrafo de la red principal](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) para obtener los ID de todas las asignaciones activas: ```graphql query indexerAllocations { @@ -60,135 +60,135 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Utiliza Etherscan para solicitar el `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Navega a través de [la interfaz de Etherscan para ver el contrato de recompensas](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -* To call `getRewards()`: - - Expand the **10. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +* Para llamar `getRewards()`: + - Eleva el **10. getRewards** dropdown. + - Introduce el **allocationID** en la entrada. + - Presiona el botón de **Query**. -### What are disputes and where can I view them? +### ¿Qué son las disputas y dónde puedo verlas? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Las consultas y asignaciones del Indexador se pueden disputar en The Graph durante el período de disputa. El período de disputa varía según el tipo de disputa. Las consultas tienen una ventana de disputa de 7 ciclos, mientras que las asignaciones tienen 56 ciclos. Una vez transcurridos estos períodos, no se pueden abrir disputas contra asignaciones o consultas. Cuando se abre una disputa, los Fishermen requieren un depósito mínimo de 10,000 GRT, que permanecerá bloqueado hasta que finalice la disputa y se haya dado una resolución. Los Fishermen (o pescadores) son todos los participantes de la red que abren disputas. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Las disputas se pueden ver en la interfaz de usuario, en la página de perfil de un Indexador, en la pestaña `Disputas`. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Si se rechaza la disputa, los GRT depositados por los Fishermen se quemarán y el Indexador en disputa no será recortado. +- Si la disputa se resuelve como empate, se devolverá el depósito de los Fishermen y no se recortará al indexador en disputa. +- Si la disputa es aceptada, los GRT depositados por los Fishermen será devuelto, el Indexador en disputa será recortado y los Fishermen ganarán el 50% de los GRT recortados. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. 
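For operators who prefer the command line to the Etherscan click-through above, the same read-only call can be made with any ABI-aware RPC tool. The sketch below uses Foundry's `cast` purely as an illustration; the RewardsManager address is the one given above, but the exact `getRewards` signature and return type are assumptions that should be verified against the contract before relying on them.

```sh
# Hypothetical CLI equivalent of the Etherscan getRewards() steps above.
# Assumes getRewards(address) returns the pending rewards (in wei) for an allocation ID.
cast call 0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66 \
  "getRewards(address)(uint256)" \
  0xYOUR_ALLOCATION_ID \
  --rpc-url "$ETH_RPC_URL"
```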
+Las disputas se podran visualizar en la interfaz correspondiente al perfil del indexador en la pestaña de `disputas`. -### What are query fee rebates and when are they distributed? +### ¿Qué son los reembolsos de tarifas de consulta y cuándo se distribuyen? -Query fees are collected by the gateway whenever an allocation is closed and accumulated in the subgraph's query fee rebate pool. The rebate pool is designed to encourage Indexers to allocate stake in rough proportion to the amount of query fees they earn for the network. The portion of query fees in the pool that are allocated to a particular indexer is calculated using the Cobbs-Douglas Production Function; the distributed amount per indexer is a function of their contributions to the pool and their allocation of stake on the subgraph. +La puerta de enlace (gateway) recoge las tarifas de consulta cada vez que se cierra una asignación y se acumulan en el pool de reembolsos de tarifas de consulta del subgrafo. El pool de reembolsos está diseñado para alentar a los Indexadores a asignar participación en una proporción aproximada del monto de tarifas de consulta que ganan para la red. La parte de las tarifas de consulta en el pool que se asigna a un indexador en particular se calcula mediante la Función de Producción Cobbs-Douglas; el monto distribuido por indexador es una función de sus contribuciones al pool y su asignación de participación (stake) en el subgrafo. -Once an allocation has been closed and the dispute period has passed the rebates are available to be claimed by the indexer. Upon claiming, the query fee rebates are distributed to the indexer and their delegators based on the query fee cut and the delegation pool proportions. +Una vez que se ha cerrado una asignación y ha pasado el período de disputa, los reembolsos están disponibles para ser reclamados por el indexador. Al reclamar, los reembolsos de la tarifa de consulta se distribuyen al indexador y sus delegadores en función del recorte de la tarifa de consulta y las proporciones del pool de delegación. -### What is query fee cut and indexing reward cut? +### ¿Qué es el recorte de la tarifa de consulta y el recorte de la recompensa de indexación? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the indexer and their delegators. See the last steps in [Staking in the Protocol](/indexing#stake-in-the-protocol) for instructions on setting the delegation parameters. +Los valores `queryFeeCut` y `indexingRewardCut` son parámetros de delegación que el Indexador puede establecer junto con cooldownBlocks para controlar la distribución de GRT entre el indexador y sus delegadores. Consulta los últimos pasos en [Staking en el protocolo](/indexing#stake-in-the-protocol) para obtener instrucciones sobre cómo configurar los parámetros de delegación. -- **queryFeeCut** - the % of query fee rebates accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the query fee rebate pool when an allocation is claimed with the other 5% going to the delegators. +- **queryFeeCut**: el porcentaje de los reembolsos de tarifas de consulta acumulados en un subgrafo que se distribuirá al indexador. Si se establece en 95%, el indexador recibirá el 95% del pool de reembolsos de la tarifa de consulta cuando se reclame una asignación y el otro 5% irá a los delegadores. 
-- **indexingRewardCut** - the % of indexing rewards accumulated on a subgraph that will be distributed to the indexer. If this is set to 95%, the indexer will receive 95% of the indexing rewards pool when an allocation is closed and the delegators will split the other 5%. +- **indexingRewardCut**: el porcentaje de las recompensas de indexación acumuladas en un subgrafo que se distribuirá al indexador. Si se establece en 95%, el indexador recibirá el 95% del pool de recompensas de indexación cuando se cierre una asignación y los delegadores dividirán el otro 5%. -### How do indexers know which subgraphs to index? +### ¿Cómo saben los indexadores qué subgrafos indexar? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Los indexadores pueden diferenciarse aplicando técnicas avanzadas para tomar decisiones de indexación de subgrafos, pero para dar una idea general, discutiremos varias métricas clave que se utilizan para evaluar subgrafos en la red: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Señal de curación **: la proporción de señal de curación de la red aplicada a un subgrafo en particular es un buen indicador del interés en ese subgrafo, especialmente durante la fase de lanzamiento cuando el volumen de consultas aumenta. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- ** Tarifas de consulta recogidas**: Los datos históricos del volumen de tarifas de consulta recogidas para un subgrafo específico son un buen indicador de la demanda futura. -- **Amount staked** - Monitoring the behavior of other indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- ** Cantidad en staking**: Monitorear el comportamiento de otros indexadores u observar las proporciones de la participación total asignada a subgrafos específicos puede permitirle al indexador monitorear el lado de la oferta en busca de consultas de subgrafos para identificar subgrafos que los que la red muestra confianza o subgrafos que pueden mostrar una necesidad de mayor suministro. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- ** Subgrafos sin recompensas de indexación**: Algunos subgrafos no generan recompensas de indexación principalmente porque utilizan funciones no compatibles como IPFS o porque están consultando otra red fuera de la red principal. Verás un mensaje en un subgrafo si no genera recompensas de indexación. -### What are the hardware requirements? +### ¿Cuáles son los requisitos de hardware? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. 
-- **Medium** - Production indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Pequeño**: Lo suficiente como para comenzar a indexar varios subgrafos, es probable que deba expandirse. +- **Estándar**: Configuración predeterminada, esto es lo que se usa en los manifiestos de implementación de k8s/terraform de ejemplo. +- **Medio**: Indexador de producción que admite 100 subgrafos y 200-500 solicitudes por segundo. +- **Grande**: Preparado para indexar todos los subgrafos utilizados actualmente y atender solicitudes para el tráfico relacionado. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Configuración | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| ------------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Pequeño | 4 | 8 | 1 | 4 | 16 | +| Estándar | 8 | 30 | 1 | 12 | 48 | +| Medio | 16 | 64 | 2 | 32 | 64 | +| Grande | 72 | 468 | 3,5 | 48 | 184 | -### What are some basic security precautions an indexer should take? +### ¿Cuáles son algunas de las precauciones de seguridad básicas que debe tomar un indexador? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing#stake-in-the-protocol) for instructions. +- ** Billetera del operador**: Configurar una billetera del operador es una precaución importante porque permite que un indexador mantenga la separación entre sus claves que controlan la participación (stake) y las que tienen el control de las operaciones diarias. Consulta [Participación en el Protocolo](/indexing#stake-in-the-protocol) para obtener instrucciones. -- **Firewall** - Only the indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **Firewall**: Solo el servicio indexador debe exponerse públicamente y se debe prestar especial atención al bloqueo de los puertos de administración y el acceso a la base de datos: el punto final JSON-RPC de Graph Node (puerto predeterminado: 8030), el punto final de la API de administración del indexador (puerto predeterminado: 18000) y el punto final de la base de datos de Postgres (puerto predeterminado: 5432) no deben estar expuestos. -## Infrastructure +## Infraestructura -At the center of an indexer's infrastructure is the Graph Node which monitors Ethereum, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/introduction#how-the-graph-works). The Graph Node needs to be connected to Ethereum EVM node endpoints, and IPFS node for sourcing data; a PostgreSQL database for its store; and indexer components which facilitate its interactions with the network. +En el centro de la infraestructura de un indexador está el Graph Node que monitorea Ethereum, extrae y carga datos según una definición de subgrafo y lo sirve como una [GraphQL API](/about/introduction#how-the-graph-works). El Graph Node debe estar conectado a los puntos finales del nodo Ethereum EVM y al nodo IPFS para obtener datos; una base de datos PostgreSQL para su tienda; y componentes del indexador que facilitan sus interacciones con la red. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The indexer service and agent also use the database to store state channel data, cost models, and indexing rules. +- **Base de datos PostgreSQL**: El almacén principal para Graph Node, aquí es donde se almacenan los datos del subgrafo. El servicio y el agente del indexador también utilizan la base de datos para almacenar datos del canal de estado, modelos de costos y reglas de indexación. -- **Ethereum endpoint ** - An endpoint that exposes an Ethereum JSON-RPC API. 
This may take the form of a single Ethereum client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular Ethereum client capabilities such as archive mode and the tracing API. +- **Endpoint de Ethereum**: Un punto final que expone una API Ethereum JSON-RPC. Esto puede tomar la forma de un solo cliente Ethereum o podría ser una configuración más compleja que equilibre la carga en varios. Es importante tener en cuenta que ciertos subgrafos requerirán capacidades particulares del cliente Ethereum, como el modo de archivo y la API de seguimiento. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **Nodo IPFS (versión inferior a 5)**: Los metadatos de implementación de Subgrafo se almacenan en la red IPFS. El Graph Node accede principalmente al nodo IPFS durante la implementación del subgrafo para obtener el manifiesto del subgrafo y todos los archivos vinculados. Los indexadores de la red no necesitan alojar su propio nodo IPFS, un nodo IPFS para la red está alojado en https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Servicio de indexador**: Gestiona todas las comunicaciones externas necesarias con la red. Comparte modelos de costos y estados de indexación, transfiere solicitudes de consulta desde la puerta de acceso (gateway) a Graph Node y administra los pagos de consultas a través de canales de estado con la puerta de acceso. -- **Indexer agent** - Facilitates the indexers interactions on chain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server. +- **Agente indexador**: Facilita las interacciones de los indexadores en cadena, incluido el registro en la red, la gestión de implementaciones de subgrafos en sus Graph Node y la gestión de asignaciones. Servidor de métricas de Prometheus: los componentes Graph Node y el Indexer registran sus métricas en el servidor de métricas. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Nota: Para admitir el escalado ágil, se recomienda que las inquietudes de consulta e indexación se separen entre diferentes conjuntos de nodos: nodos de consulta y nodos de índice. -### Ports overview +### Resumen de puertos -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the indexer management endpoints detailed below. +> **Importante**: Ten cuidado con la exposición de los puertos públicamente; los **puertos de administración** deben mantenerse bloqueados. Esto incluye el Graph Node JSON-RPC y los extremos de administración del indexador que se detallan a continuación. 
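As a concrete illustration of the note above, a host-level firewall can keep the administrative endpoints unreachable while only the public-facing ports are open. This is a minimal sketch assuming a Linux host with `ufw` available (an assumption, not part of the reference setup); the port numbers are the defaults from the tables below and should be adapted to your own configuration.

```sh
# Sketch: expose only the public query-serving ports, keep admin ports closed.
sudo ufw default deny incoming
sudo ufw allow 7600/tcp   # indexer-service (paid subgraph queries)
sudo ufw allow 8000/tcp   # graph-node GraphQL HTTP, only if it is meant to be public
# Do NOT open 8020 (graph-node JSON-RPC), 18000 (indexer management API) or 5432 (Postgres).
sudo ufw enable
```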
#### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | ---------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------- | +| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | +| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | +| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | -#### Indexer Service +#### Servicio de Indexador -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | ----------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | -#### Indexer Agent +#### Agente Indexador -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | ------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de
Entorno | +| ------ | ----------------------------- | ----- | ------------------------- | --------------------------------------- | +| 8000 | API de gestión de indexadores | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Configurar la infraestructura del servidor con Terraform en Google Cloud -#### Install prerequisites +#### Instalar requisitos previos -- Google Cloud SDK -- Kubectl command line tool +- SDK de Google Cloud +- Herramienta de línea de comandos de Kubectl - Terraform -#### Create a Google Cloud Project +#### Crear un proyecto de Google Cloud -- Clone or navigate to the indexer repository. +- Clona o navega hasta el repositorio del indexador. -- Navigate to the ./terraform directory, this is where all commands should be executed. +- Navega al directorio ./terraform, aquí es donde se deben ejecutar todos los comandos. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Autentícate con Google Cloud y crea un nuevo proyecto. ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Usa la \[página de facturación\](página de facturación) de Google Cloud Console para habilitar la facturación del nuevo proyecto. -- Create a Google Cloud configuration. +- Crea una configuración de Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Habilita las API requeridas de Google Cloud. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Crea una cuenta de servicio. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Habilita el emparejamiento entre la base de datos y el clúster de Kubernetes que se creará en el siguiente paso. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Crea un archivo de configuración mínimo de terraform (actualiza según sea necesario). ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Usa Terraform para crear infraestructura -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Antes de ejecutar cualquier comando, lee [ variables.tf ](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) y crea un archivo `terraform.tfvars` en este directorio (o modifica el que creamos en el último paso). 
Para cada variable en la que deseas anular el valor predeterminado, o donde necesites establecer un valor, ingresa una configuración en `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Ejecuta los siguientes comandos para crear la infraestructura. ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Implementa todos los recursos con `kubectl apply -k $dir`. ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the indexer +#### Crea los componentes de Kubernetes para el indexador -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- Copia el directorio `k8s/overlays` a un nuevo directorio `$dir,` y ajusta la entrada `bases` en `$dir/kustomization.yaml` para que apunte al directorio `k8s/base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- Lee todos los archivos en `$dir` y ajusta cualquier valor como se indica en los comentarios. -Deploy all resources with `kubectl apply -k $dir`. +Despliega todas las fuentes usando `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the block chain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) es una implementación de Rust de código abierto que genera eventos en la blockchain Ethereum para actualizar de manera determinista un almacén de datos que se puede consultar a través del Punto final GraphQL. Los desarrolladores usan subgrafos para definir su esquema, y ​​un conjunto de mapeos para transformar los datos provenientes de la blockchain y Graph Node maneja la sincronización de toda la cadena, monitorea nuevos bloques y sirve a través de un punto final GraphQL. -#### Getting started from source +#### Empezar desde el origen -#### Install prerequisites +#### Instalar Prerrequisitos - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Requisitos adicionales para usuarios de Ubuntu**: Para ejecutar un nodo Graph en Ubuntu, es posible que se necesiten algunos paquetes adicionales. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### Configurar -1. Start a PostgreSQL database server +1. Inicia un servidor de base de datos PostgreSQL ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Clona el repositorio [Graph Node](https://github.com/graphprotocol/graph-node) y crea la fuente ejecutando `cargo build` -3. 
Now that all the dependencies are setup, start the Graph Node: +3. Ahora que todas las dependencias están configuradas, inicia el nodo Graph (Graph Node): ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Empezar usando Docker -#### Prerequisites +#### Prerrequisitos -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- ** nodo Ethereum**: De forma predeterminada, la configuración de composición de Docker utilizará la red principal: [http://host.docker.internal:8545](http://host.docker.internal:8545) para conectarse al nodo Ethereum en su máquina alojada. Puedes reemplazar este nombre de red y url actualizando `docker-compose.yaml`. -#### Setup +#### Configurar -1. Clone Graph Node and navigate to the Docker directory: +1. Clona Graph Node y navega hasta el directorio de Docker: ```sh git clone http://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml`using the included script: +2. Solo para usuarios de Linux: usa la dirección IP del host en lugar de `host.docker.internal` en `docker-compose.yaml`usando el texto incluido: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Inicia un Graph Node local que se conectará a su punto final de Ethereum: ```sh docker-compose up ``` -### Indexer components +### Componentes de Indexador -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three indexer components: +Para participar con éxito en la red se requiere una supervisión e interacción casi constantes, por lo que hemos creado un conjunto de aplicaciones de Typecript para facilitar la participación de una red de indexadores. Hay tres componentes de indexador: -- **Indexer agent** - The agent monitors the network and the indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards on chain and how much is allocated towards each. +- ** Agente indexador**: el agente monitorea la red y la propia infraestructura del indexador y administra qué implementaciones de subgrafos se indexan y asignan en la cadena y cuánto se asigna a cada uno. -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Servicio de indexación**: El único componente que debe exponerse externamente, el servicio transfiere las consultas de subgrafo al graph node, administra los canales de estado para los pagos de consultas, comparte información importante para la toma de decisiones a clientes como las puertas de acceso (gateway). -- **Indexer CLI** - The command line interface for managing the indexer agent. It allows indexers to manage cost models and indexing rules. +- **CLI de Indexador**: La interfaz de línea de comandos para administrar el agente indexador. 
Permite a los indexadores administrar modelos de costos y reglas de indexación.

-#### Getting started
+#### Comenzar

-The indexer agent and indexer service should be co-located with your Graph Node infrastructure. There are many ways to setup virtual execution environments for you indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://thegraph.com/discord)! Remember to [stake in the protocol](/indexing#stake-in-the-protocol) before starting up your indexer components!
+El agente indexador y el servicio indexador deben ubicarse junto con tu infraestructura de Graph Node. Hay muchas formas de configurar entornos de ejecución virtual para tus componentes de indexador; aquí explicaremos cómo ejecutarlos en baremetal utilizando paquetes de NPM o el código fuente, o mediante kubernetes y docker en Google Cloud Kubernetes Engine. Si estos ejemplos de configuración no se traducen bien en tu infraestructura, es probable que haya una guía de la comunidad de referencia, ¡ven a saludar en [Discord](https://thegraph.com/discord)! ¡Recuerda hacer [staking en el protocolo](/indexing#stake-in-the-protocol) antes de iniciar tus componentes de indexador!

-#### From NPM packages
+#### Paquetes de NPM

```sh
npm install -g @graphprotocol/indexer-service
npm install -g @graphprotocol/indexer-agent

# Indexer CLI is a plugin for Graph CLI, so both need to be installed:
npm install -g @graphprotocol/graph-cli
npm install -g @graphprotocol/indexer-cli

# Indexer service
graph-indexer-service start ...

# Indexer agent
graph-indexer-agent start ...

# Indexer CLI
graph indexer connect http://localhost:18000/
graph indexer ...
```

-#### From source
+#### Fuente

```sh
# From Repo root directory
yarn

# Indexer Service
cd packages/indexer-service
./bin/graph-indexer-service start ...

# Indexer agent
cd packages/indexer-agent
./bin/graph-indexer-service start ...

# Indexer CLI
cd packages/indexer-cli
./bin/graph-indexer-cli indexer ...
```

-#### Using docker
+#### Uso de Docker

-- Pull images from the registry
+- Extrae imágenes del registro

```sh
docker pull ghcr.io/graphprotocol/indexer-service:latest
docker pull ghcr.io/graphprotocol/indexer-agent:latest
```

-Or build images locally from source
+O construye las imágenes localmente desde el código fuente

```sh
# Indexer service
docker build \
  -f Dockerfile.indexer-service \
  -t indexer-service:latest \
# Indexer agent
docker build \
  -f Dockerfile.indexer-agent \
  -t indexer-agent:latest \
```

-- Run the components
+- Ejecuta los componentes

```sh
docker run -p 7600:7600 -it indexer-service:latest ...
docker run -p 18000:8000 -it indexer-agent:latest ...
```

-**NOTE**: After starting the containers, the indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the indexer agent should be exposing the indexer management API at [http://localhost:18000/](http://localhost:18000/).
+**NOTA**: Después de iniciar los contenedores, se debe poder acceder al servicio de indexación en [http://localhost:7600](http://localhost:7600) y el agente indexador debería exponer la API de administración del indexador en [http://localhost:18000/](http://localhost:18000/).

-#### Using K8s and Terraform
+#### Uso de K8s y Terraform

-See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) section
+Consulta la sección [Configuración de la infraestructura del servidor con Terraform en Google Cloud](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud)
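When the indexer components run inside the Kubernetes cluster from the Terraform setup referenced above, the Indexer CLI can still be operated from a workstation by forwarding the management port. A minimal sketch, in which the namespace and pod name are placeholders:

```sh
# Forward the indexer management API (port 8000 inside the pod) to localhost:18000,
# then point the Indexer CLI at it. Namespace and pod name are hypothetical.
kubectl port-forward --namespace indexer pod/indexer-agent-0 18000:8000 &
graph indexer connect http://localhost:18000
graph indexer status
```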
-#### Usage +#### Uso -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **NOTA**: Todas las variables de configuración de tiempo de ejecución se pueden aplicar como parámetros al comando en el inicio o usando variables de entorno con el formato `COMPONENT_NAME_VARIABLE_NAME`(ej. `INDEXER_AGENT_ETHEREUM`). -#### Indexer agent +#### Agente Indexador ```sh graph-indexer-agent start \ @@ -487,7 +487,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Servicio de Indexador ```sh SERVER_HOST=localhost \ @@ -515,42 +515,42 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI es un complemento para [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accesible en la terminal de `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using indexer CLI +#### Gestión del indexador mediante Indexer CLI -The indexer agent needs input from an indexer in order to autonomously interact with the network on the behalf of the indexer. The mechanism for defining indexer agent behavior are the **indexing rules**. Using **indexing rules** an indexer can apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. +El agente indexador necesita información de un indexador para interactuar de forma autónoma con la red en nombre del indexador. El mecanismo para definir el comportamiento del agente indexador son las **reglas de indexación**. Con las **reglas de indexación**, un indexador puede aplicar su estrategia específica para seleccionar subgrafos para indexar y atender consultas. Las reglas se administran a través de una API GraphQL proporcionada por el agente y conocida como API de administración de indexadores (Indexer Management API). La herramienta sugerida para interactuar con la **API de Administración del Indexador** es la **Indexer CLI**, una extensión de **Graph CLI**. -#### Usage +#### Uso -The **Indexer CLI** connects to the indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +La **CLI del Indexador** se conecta al agente del indexador, normalmente a través del reenvío de puertos, por lo que no es necesario que CLI se ejecute en el mismo servidor o clúster. Para ayudarte a comenzar y proporcionar algo de contexto, la CLI se describirá brevemente aquí. -- `graph indexer connect ` - Connect to the indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Conéctate a la API de administración del indexador. Normalmente, la conexión al servidor se abre mediante el reenvío de puertos, por lo que la CLI se puede operar fácilmente de forma remota. 
(Ejemplo: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the indexer agent. +- `graph indexer rules get [options] ...]` - Obtén una o más reglas de indexación usando `all` `` para obtener todas las reglas, o `global` para obtener los valores globales predeterminados. Se puede usar un argumento adicional `--merged` para especificar que las reglas específicas de implementación se fusionan con la regla global. Así es como se aplican en el agente indexador. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - Establece una o más reglas de indexación. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Empieza a indexar una implementación de subgrafo si está disponible y establece su `decisionBasis` en `always`, por lo que el agente indexador siempre elegirá indexarlo. Si la regla global se establece en siempre, se indexarán todos los subgrafos disponibles en la red. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Dejq de indexar una implementación y establece tu `decisionBasis` en never (nunca), por lo que omitirá esta implementación cuando decida qué implementaciones indexar. -- `graph indexer rules maybe [options] ` — Set `thedecisionBasis` for a deployment to `rules`, so that the indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` - Configura `thedecisionBasis` para una implementación en `rules`, de modo que el agente indexador use las reglas de indexación para decidir si indexar esta implementación. -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +Todos los comandos que muestran reglas en la salida pueden elegir entre los formatos de salida admitidos (`table`, `yaml` y `json`) utilizando `-output` argument. -#### Indexing rules +#### Reglas de Indexación -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Las reglas de indexación se pueden aplicar como valores predeterminados globales o para implementaciones de subgrafos específicos usando sus ID. Los campos `deployment` y `decisionBasis` son obligatorios, mientras que todos los demás campos son opcionales. 
Cuando una regla de indexación tiene `rules` como `decisionBasis`, el agente indexador comparará los valores de umbral no nulos en esa regla con los valores obtenidos de la red para la implementación correspondiente. Si la implementación del subgrafo tiene valores por encima (o por debajo) de cualquiera de los umbrales, se elegirá para la indexación. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +Por ejemplo, si la regla global tiene un `minStake` de **5** (GRT), cualquier implementación de subgrafo que tenga más de 5 (GRT) de participación (stake) asignado a él será indexado. Las reglas de umbral incluyen `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake` y `minAverageQueryFees`. -Data model: +Modelo de Datos: ```graphql type IndexingRule { @@ -573,17 +573,17 @@ IndexingDecisionBasis { } ``` -#### Cost models +#### Modelos de Costos -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make indexer selection decisions per query and to negotiate payment with chosen indexers. +Los modelos de costos proporcionan precios dinámicos para consultas basadas en el mercado y los atributos de la consulta. El Servicio de Indexación comparte un modelo de costos con las puertas de enlace para cada subgrafo para el que pretenden responder a las consultas. Las puertas de enlace, a su vez, utilizan el modelo de costos para tomar decisiones de selección de indexadores por consulta y para negociar el pago con los indexadores elegidos. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +El lenguaje Agora proporciona un formato flexible para declarar modelos de costos para consultas. Un modelo de precios de Agora es una secuencia de declaraciones que se ejecutan en orden para cada consulta de nivel superior en una consulta GraphQL. Para cada consulta de nivel superior, la primera declaración que coincide con ella determina el precio de esa consulta. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Una declaración se compone de un predicado, que se utiliza para hacer coincidir consultas GraphQL, y una expresión de costo que, cuando se evalúa, genera un costo en GRT decimal. Los valores en la posición del argumento nombrado de una consulta pueden capturarse en el predicado y usarse en la expresión. Los globales también se pueden establecer y sustituir por marcadores de posición en una expresión. 
-Example cost model: +Ejemplo de costos de consultas utilizando el modelo anterior: ``` # This statement captures the skip value, @@ -596,75 +596,75 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Ejemplo de modelo de costo: -| Query | Price | +| Consulta | Precio | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | +| { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT | -#### Applying the cost model +#### Aplicando el modelo de costos -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Los modelos de costos se aplican a través de la CLI de Indexer, que los pasa a la API de Administración de Indexador del agente indexador para almacenarlos en la base de datos. Luego, el Servicio del Indexador los recogerá y entregará los modelos de costos a las puertas de enlace siempre que los soliciten. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Interactuar con la red -### Stake in the protocol +### Participar en el protocolo -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. _ **Note**: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools)._ +Los primeros pasos para participar en la red como Indexador son aprobar el protocolo, stakear fondos y (opcionalmente) configurar una dirección de operador para las interacciones diarias del protocolo. _ **Nota**: A los efectos de estas instrucciones, Remix se utilizará para la interacción del contrato, pero no dudes en utilizar la herramienta que elijas (\[OneClickDapp\](https: // oneclickdapp.com/), [ABItopic](https://abitopic.io/) y [MyCrypto](https://www.mycrypto.com/account) son algunas otras herramientas conocidas)._ -Once an indexer has staked GRT in the protocol, the [indexer components](/indexing#indexer-components) can be started up and begin their interactions with the network. +Después de ser creada por un indexador, una asignación saludable pasa por cuatro estados. -#### Approve tokens +#### Aprobar tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Abre la [aplicación Remix](https://remix.ethereum.org/) en un navegador -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. En el `File Explorer`, crea un archivo llamado **GraphToken.abi** con [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the Deploy and `Run Transactions` section in the Remix interface. +3. 
Con `GraphToken.abi` seleccionado y abierto en el editor, cambia a la sección Implementar (Deploy) y `Run Transactions` en la interfaz Remix.

-4. Under environment select `Injected Web3` and under `Account` select your indexer address.
+4. En entorno, selecciona `Injected Web3` y en `Account` selecciona tu dirección de indexador.

-5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply.
+5. Establece la dirección del contrato GraphToken: pega la dirección del contrato GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) junto a `At Address` y haz clic en el botón `At address` para aplicar.

-6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei).
+6. Llama a la función `approve(spender, amount)` para aprobar el contrato de Staking. Completa `spender` con la dirección del contrato de Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) y `amount` con los tokens a stakear (en wei).

-#### Stake tokens
+#### Staking de tokens

-1. Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Abre la [aplicación Remix](https://remix.ethereum.org/) en un navegador

-2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI.
+2. En el `File Explorer`, crea un archivo llamado **Staking.abi** con la ABI de staking.

-3. With `Staking.abi` selected and open in the editor, switch to the `Deploy` and `Run Transactions` section in the Remix interface.
+3. Con `Staking.abi` seleccionado y abierto en el editor, cambia a la sección `Deploy` y `Run Transactions` en la interfaz Remix.

-4. Under environment select `Injected Web3` and under `Account` select your indexer address.
+4. En entorno, selecciona `Injected Web3` y en `Account` selecciona tu dirección de indexador.

-5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply.
+5. Establece la dirección del contrato de staking - Pega la dirección del contrato de Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) junto a `At Address` y haz clic en el botón `At address` para aplicar.

-6. Call `stake()` to stake GRT in the protocol.
+6. Llama a `stake()` para bloquear GRT en el protocolo.

-7. (Optional) Indexers may approve another address to be the operator for their indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Opcional) Los indexadores pueden aprobar otra dirección para que sea el operador de su infraestructura de indexación a fin de separar las claves que controlan los fondos de las que realizan acciones cotidianas, como la asignación en subgrafos y el servicio de consultas (pagadas). Para configurar el operador, llama a `setOperator()` con la dirección del operador.

-8. (Optional) In order to control the distribution of rewards and strategically attract delegators indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks).
To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the indexer and 5% to delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the indexer and 40% to delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Opcional) Para controlar la distribución de recompensas y atraer estratégicamente a los delegadores, los indexadores pueden actualizar sus parámetros de delegación actualizando su indexingRewardCut (partes por millón), queryFeeCut (partes por millón) y cooldownBlocks (número de bloques). Para hacerlo, llama a `setDelegationParameters()`. El siguiente ejemplo establece queryFeeCut para distribuir el 95% de los reembolsos de consultas al indexador y el 5% a los delegadores, establece indexingRewardCut para distribuir el 60% de las recompensas de indexación al indexador y el 40% a los delegadores, y establece el periodo de `cooldownBlocks` en 500 bloques.

```
setDelegationParameters(950000, 600000, 500)
```

-### The life of an allocation
+### La vida de una asignación

-After being created by an indexer a healthy allocation goes through four states.
+Después de ser creada por un indexador, una asignación saludable pasa por cuatro fases.

- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The indexer agent manages creating allocations based on the indexer rules.
+- **Activo**: Una vez que se crea una asignación en la cadena ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) se considera **activa**. Una parte de la participación propia y/o delegada del indexador se asigna a una implementación de subgrafo, lo que le permite reclamar recompensas de indexación y atender consultas para esa implementación de subgrafo. El agente indexador gestiona la creación de asignaciones basadas en las reglas del indexador.

- **Closed** - An indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the indexer and its delegators (see "how are rewards distributed?" below to learn more).
+- **Cerrado**: Un indexador puede cerrar una asignación una vez que haya pasado 1 ciclo ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) o su agente indexador cerrará automáticamente la asignación después de **maxAllocationEpochs** (actualmente 28 días). Cuando una asignación se cierra con una prueba válida de indexación (POI), sus recompensas de indexación se distribuyen al indexador y sus delegadores (consulta "¿Cómo se distribuyen las recompensas?" a continuación para obtener más información).

- **Finalized** - Once an allocation has been closed there is a dispute period after which the allocation is considered **finalized** and it's query fee rebates are available to be claimed (claim()).
The indexer agent monitors the network to detect **finalized** allocations and claims them if they are above a configurable (and optional) threshold, **—-allocation-claim-threshold**.
+- **Finalizada**: Una vez que se ha cerrado una asignación, hay un período de disputa después del cual la asignación se considera **finalizada** y los reembolsos de tarifas de consulta están disponibles para ser reclamados (claim()). El agente indexador supervisa la red para detectar asignaciones **finalizadas** y las reclama si están por encima de un umbral configurable (y opcional), **--allocation-claim-threshold**.

- **Claimed** - The final state of an allocation; it has run its course as an active allocation, all eligible rewards have been distributed and its query fee rebates have been claimed.
+- **Reclamado**: El estado final de una asignación; ha seguido su curso como una asignación activa, se han distribuido todas las recompensas elegibles y se han reclamado los reembolsos de las tarifas de consulta.

From e6324cbb074b217a580d26387a357219ef905334 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?=
Date: Thu, 27 Jan 2022 20:11:39 -0500
Subject: [PATCH 224/241] New translations curating.mdx (Arabic)

---
 pages/ar/curating.mdx | 104 +++++++++++++++++++++---------------------
 1 file changed, 52 insertions(+), 52 deletions(-)

diff --git a/pages/ar/curating.mdx b/pages/ar/curating.mdx
index 7f542ca5ebc8..6e37a8776a6f 100644
--- a/pages/ar/curating.mdx
+++ b/pages/ar/curating.mdx
@@ -2,102 +2,102 @@
 title: (التنسيق) curating
 ---

-Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs.
+المنسقون مهمون للاقتصاد اللامركزي في the Graph. يستخدمون معرفتهم بالنظام البيئي web3 للتقييم والإشارة ل Subgraphs والتي تفهرس بواسطة شبكة The Graph. من خلال المستكشف (Explorer)، يستطيع المنسقون (curators) عرض بيانات الشبكة وذلك لاتخاذ قرارات الإشارة. تقوم شبكة The Graph بمكافئة المنسقين الذين يشيرون إلى ال Subgraphs عالية الجودة بحصة من رسوم الاستعلام التي تولدها ال subgraphs. يتم تحفيز المنسقون(Curators) ليقومون بالإشارة بشكل مبكر. هذه الإشارات من المنسقين مهمة للمفهرسين ، والذين يمكنهم بعد ذلك معالجة أو فهرسة البيانات من ال subgraphs المشار إليها.

-When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version.
+يمكن للمنسقين اتخاذ القرار إما بالإشارة إلى إصدار معين من Subgraphs أو الإشارة باستخدام الترحيل التلقائي auto-migrate. عند الإشارة باستخدام الترحيل التلقائي ، ستتم دائما ترقية حصص المنسق إلى أحدث إصدار ينشره المطور. وإذا قررت الإشارة إلى إصدار معين، فستظل الحصص دائما في هذا الإصدار المحدد.

-Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like.
For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) +تذكر أن عملية التنسيق محفوفة بالمخاطر. نتمنى أن تبذل قصارى جهدك وذلك لتنسق ال Subgraphs الموثوقة. إنشاء ال subgraphs لا يحتاج إلى ترخيص، لذلك يمكن للأشخاص إنشاء subgraphs وتسميتها بأي اسم يرغبون فيه. لمزيد من الإرشادات حول مخاطر التنسيق ، تحقق من[The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) -## Bonding Curve 101 +## منحنى الترابط 101 -First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. +أولا لنعد خطوة إلى الوراء. يحتوي كل subgraphs على منحنى ربط يتم فيه صك حصص التنسيق ، وذلك عندما يضيف المستخدم إشارة **للمنحنى**. لكل Subgraphs منحنى ترابط فريد من نوعه. يتم تصميم منحنيات الترابط بحيث يزداد بشكل ثابت سعر صك حصة التنسيق على Subgraphs ، وذلك مقارنة بعدد الحصص التي تم صكها. -![Price per shares](/img/price-per-share.png) +![سعر السهم](/img/price-per-share.png) -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: +نتيجة لذلك ، يرتفع السعر بثبات ، مما يعني أنه سيكون شراء السهم أكثر تكلفة مع مرور الوقت. فيما يلي مثال لما نعنيه ، راجع منحنى الترابط أدناه: -![Bonding curve](/img/bonding-curve.png) +![منحنى الترابط Bonding curve](/img/bonding-curve.png) -Consider we have two curators that mint shares for a subgraph: +ضع في اعتبارك أن لدينا منسقان يشتركان في Subgraph واحد: -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. +- المنسق (أ) هو أول من أشار إلى ال Subgraphs. من خلال إضافة 120000 GRT إلى المنحنى ، سيكون من الممكن صك 2000 سهم. +- تظهر إشارة المنسق "ب" على ال Subgraph لاحقا. للحصول على نفس كمية حصص المنسق "أ" ، يجب إضافة 360000 GRT للمنحنى. +- لأن كلا من المنسقين يحتفظان بنصف إجمالي اسهم التنسيق ، فإنهم سيحصلان على قدر متساوي من عوائد المنسقين. +- إذا قام أي من المنسقين بحرق 2000 من حصص التنسيق الخاصة بهم ،فإنهم سيحصلون على 360000 GRT. +- سيحصل المنسق المتبقي على جميع عوائد المنسق لهذ ال subgraphs. وإذا قام بحرق حصته للحصول علىGRT ، فإنه سيحصل على 120.000 GRT. +- ** TLDR: ** يكون تقييم أسهم تنسيق GRT من خلال منحنى الترابط ويمكن أن يكون متقلبا. هناك إمكانية لتكبد خسائر كبيرة. الإشارة في وقت مبكر يعني أنك تضع كمية أقل من GRT لكل سهم. هذا يعني أنك تكسب من عائدات المنسق لكل GRT أكثر من المنسقين المتأخرين لنفس ال subgraph. 
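للتوضيح فقط، وبافتراض منحنى سعر خطي مبسّط (وليس صيغة Bancor الفعلية المستخدمة في البروتوكول)، يعيد المثال التالي بلغة TypeScript حساب أرقام المنسقَين المذكورَين أعلاه:

```typescript
// مثال توضيحي مبسط فقط: سعر السهم يرتفع خطيا مع عدد الأسهم المصكوكة
// price(s) = 0.06 * s (قيمة الميل مختارة لتطابق أرقام المثال أعلاه).
// تكلفة صك الأسهم من a إلى b هي المساحة تحت المنحنى: 0.06 * (b^2 - a^2) / 2.
function mintCost(fromShares: number, toShares: number): number {
  return (3 * (toShares * toShares - fromShares * fromShares)) / 100; // يساوي 0.03 * (b^2 - a^2)
}

console.log(mintCost(0, 2000));    // 120000 GRT — المنسق (أ) يصك أول 2000 سهم
console.log(mintCost(2000, 4000)); // 360000 GRT — المنسق (ب) يدفع ثلاثة أضعاف لنفس عدد الأسهم
```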
-In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.**
+بشكل عام ، منحنى الترابط هو منحنى رياضي يحدد العلاقة بين عرض التوكن وسعر الأصول. في الحالة المحددة لتنسيق ال subgraph ، **يرتفع سعر كل سهم في ال subgraph مع كل توكن مستثمر** و**يقل سعر كل سهم مع كل بيع للتوكن**.

-In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged.
+في حالة The Graph ، يتم الاستفادة من [تطبيق Bancor لصيغة منحنى الترابط](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA).

-## How to Signal
+## كيفية الإشارة

-Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer)
+الآن بعد أن غطينا الأساسيات حول كيفية عمل منحنى الترابط ،طريقة الإشارة على ال subgraph هي كالتالي. ضمن علامة التبويب "Curator" في "Graph Explorer" ، سيتمكن المنسقون من الإشارة وإلغاء الإشارة إلى بعض ال subgraphs بناء على إحصائيات الشبكة. للحصول على نظرة عامة خطوة بخطوة حول كيفية القيام بذلك في Explorer ،[انقر هنا](https://thegraph.com/docs/explorer)

-A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons.
+يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات.

-Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred.
+الإشارة إلى إصدار معين مفيدة بشكل خاص عند استخدام subgraph واحد بواسطة عدة dapps. قد يحتاج ال dapp إلى تحديث ال subgraph بانتظام بميزات جديدة. وقد يفضل dapp آخر استخدام إصدار subgraph أقدم تم اختباره جيدا. عند بداية التنسيق ، يتم فرض ضريبة بنسبة 1٪.

-Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares.
+يمكن أن يكون ترحيل migration الإشارة تلقائيا إلى أحدث إصدار أمرا ذا قيمة لضمان استمرار تراكم رسوم الاستعلام. في كل مرة تقوم فيها بالتنسيق ، يتم فرض ضريبة تنسيق بنسبة 1٪. ستدفع أيضًا ضريبة تنسيق 0.5٪ على كل ترحيل. لا يُنصح مطورو ال Subgraph بنشر إصدارات جديدة بشكل متكرر - يتعين عليهم دفع ضريبة تنسيق بنسبة 0.5٪ على جميع أسهم التنسيق المرحلة تلقائيًا.

-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy.
+> ** ملاحظة **: العنوان الأول الذي يشير ل subgraph معين يعتبر هو المنسق الأول وسيتعين عليه القيام بأعمال gas أكثر بكثير من بقية المنسقين التاليين لأن المنسق الأول يهيئ توكن أسهم التنسيق، ويهيئ منحنى الترابط ، وكذلك ينقل التوكن إلى the Graph proxy. -## What does Signaling mean for The Graph Network? +## ماذا تعني الإشارة لشبكة The Graph؟ -For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. +لكي يتمكن المستهلك من الاستعلام عن subgraph ، يجب أولا فهرسة ال subgraph. الفهرسة هي عملية يتم فيها النظر إلى الملفات، والبيانات، والبيانات الوصفية وفهرستها بحيث يمكن العثور على النتائج بشكل أسرع. يجب تنظيم بيانات ال subgraph لتكون قابلة للبحث فيها. -And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. +وبالتالي ، إذا قام المفهرسون بتخمين ال subgraphs التي يجب عليهم فهرستها ، فستكون هناك فرصة منخفضة في أن يكسبوا رسوم استعلام جيدة لأنه لن يكون لديهم طريقة للتحقق من ال subgraphs ذات الجودة العالية. أدخل التنسيق. -Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! +المنسقون بجعلون شبكة The Graph فعالة، والتأشير signaling هي العملية التي يستخدمها المنسقون لإعلام المفهرسين بأن ال subgraph جيدة للفهرسة ، حيث تتم إضافة GRT إلى منحنى الترابط ل subgraph. يمكن للمفهرسين أن يثقوا بإشارة المنسق لأنه عند الإشارة ، يقوم المنسقون بصك سهم تنسيق ال subgraph ، مما يمنحهم حق الحصول على جزء من رسوم الاستعلام المستقبلية التي ينشئها ال subgraph. إشارة المنسق يتم تمثيلها كتوكن ERC20 والتي تسمى (Graph Curation Shares (GCS. المنسقين الراغبين في كسب المزيد من رسوم الاستعلام عليهم إرسال الإشارة بـGRT إلى الـ subgraphs التي يتوقعون أنها ستولد تدفقا قويا للرسوم للشبكة.هناك ضريبة ودائع على المنسقين لتثبيط اتخاذ قرار يمكن أن يضر بسلامة الشبكة. يكسب المنسقون أيضا رسوم استعلام أقل إذا اختاروا التنسيق على subgraph منخفض الجودة ، حيث سيكون هناك عددا أقل من الاستعلامات لمعالجتها أو عددا أقل من المفهرسين لمعالجة هذه الاستعلامات. انظر إلى الرسم البياني أدناه! -![Signaling diagram](/img/curator-signaling.png) +![مخطط التأشير Signaling diagram](/img/curator-signaling.png) -Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). 
+يمكن للمفهرسين العثور على subgraphs لفهرستها وذلك بناء على إشارات التنسيق التي يرونها في The Graph Explorer (لقطة الشاشة أدناه). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![مستكشف subgraphs](/img/explorer-subgraphs.png) -## Risks +## المخاطر -1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة. +2. رسوم التنسيق - عندما يشير المنسق إلى GRT على subgraph ، فإنه يتحمل ضريبة تنسيق بنسبة 1٪. يتم حرق هذه الرسوم ويودع الباقي في العرض الاحتياطي لمنحنى الترابط. +3. عندما يحرق المنسقون أسهمهم لسحب GRT ، سينخفض تقييم GRT للأسهم المتبقية. كن على علم بأنه في بعض الحالات ، قد يقرر المنسقون حرق أسهمهم ** كلها مرة واحدة **. قد تكون هذه الحالة شائعة إذا توقف مطور dapp عن الاصدار/ التحسين والاستعلام عن ال subgraph الخاص به أو في حالة فشل ال subgraph. نتيجة لذلك ، قد يتمكن المنسقون المتبقون فقط من سحب جزء من GRT الأولية الخاصة بهم. لدور الشبكة بمخاطر أقل انظر\[Delegators\] (https://thegraph.com/docs/delegating). +4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا. + - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪. + - إذا أشرت إلى إصدار معين من subgraph وفشل ، فسيتعين عليك حرق أسهم التنسق الخاصة بك يدويا. لاحظ أنك قد تتلقى GRT أكثر أو أقل مما أودعته في البداية في منحنى التنسيق، وهي مخاطرة مرتبطة بكونك منسقا. You can then signal on the new subgraph version, thus incurring a 1% curation tax. -## Curation FAQs +## الأسئلة الشائعة حول التنسيق -### 1. What % of query fees do Curators earn? +### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟ -By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. 
+من خلال الإشارة لل subgraph ، سوف تكسب حصة من جميع رسوم الاستعلام التي يولدها هذا ال subgraph. تذهب 10٪ من جميع رسوم الاستعلام إلى المنسقين بالتناسب مع أسهم التنسيق الخاصة بهم. هذه الـ 10٪ خاضعة للقوانين. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟ -Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +يعد العثور على ال subgraphs عالية الجودة مهمة معقدة ، ولكن يمكن التعامل معها بعدة طرق مختلفة. بصفتك منسقا، فأنت تريد البحث عن ال subgraphs الموثوقة والتي تؤدي إلى زيادة حجم الاستعلام. ال subgraph الجدير بالثقة يكون ذا قيمة إذا كان مكتملا ودقيقا ويدعم احتياجات بيانات ال dapp. قد يحتاج ال subgraph الذي تم تكوينه بشكل سيئ إلى المراجعة أو إعادة النشر ، وقد ينتهي به الأمر أيضًا إلى الفشل. من المهم للمنسقين القيام بمراجعة بنية أو كود ال subgraph من أجل تقييم ما إذا كان ال subgraph ذو قيمة أم لا. كنتيجة ل: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- يمكن للمنسقين استخدام فهمهم للشبكة لمحاولة التنبؤ كيف لل subgraph أن يولد حجم استعلام أعلى أو أقل في المستقبل +- يجب أن يفهم المنسقون أيضا المقاييس المتوفرة من خلال the Graph Explorer. المقاييس مثل حجم الاستعلام السابق ومن هو مطور ال subgraph تساعد في تحديد ما إذا كان ال subgraph يستحق الإشارة إليه أم لا. -### 3. What’s the cost of upgrading a subgraph? +### 3. ما هي تكلفة ترقية ال subgraph؟ -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. +ترحيل أسهم التنسيق الخاصة بك إلى إصدار subgraph جديد يؤدي إلى فرض ضريبة تنسيق بنسبة 1٪. يمكن للمنسقين الاشتراك في أحدث إصدار من ال subgraph. عندما يتم ترحيل أسهم المنسقين تلقائيا إلى إصدار جديد ، سيدفع المنسقون أيضا نصف ضريبة التنسيق ، أي. 0.5٪ ، لأن ترقية ال subgraphs هي إجراء متسلسل يكلف غاز gas. -### 4. How often can I upgrade my subgraph? +### 4. كم مرة يمكنني ترقية ال subgraph الخاص بي؟ -It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. +يفضل عدم ترقية ال subgraphs بشكل متكرر. ارجع للسؤال أعلاه لمزيد من التفاصيل. -### 5. Can I sell my curation shares? +### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟ -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. 
As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. +لا يمكن "شراء" أو "بيع" أسهم التنسيق مثل توكنات ERC20 الأخرى التي قد تكون على دراية بها. يمكن فقط صكها (إنشاؤها) أو حرقها (إتلافها) على طول منحنى الترابط ل subgraph معين. من خلال منحنى الترابط يتم تحديد مقدار GRT اللازمة لصك إشارة جديدة ، وكمية GRT التي تتلقاها عندما تحرق إشارتك الحالية. بصفتك منسقا، عليك أن تعرف أنه عندما تقوم بحرق أسهم التنسيق الخاصة بك لسحب GRT ، فيمكن أن ينتهي بك الأمر ب GRT أكثر أو أقل مما قمت بإيداعه في البداية. -Still confused? Check out our Curation video guide below: +لازلت مشوشا؟ راجع فيديو دليل التنسيق أدناه:
From 60c5fe5643405ad714f50cd8030c073d9ad91ff6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:40 -0500 Subject: [PATCH 225/241] New translations delegating.mdx (Japanese) --- pages/ja/delegating.mdx | 81 ++++++++++++++++++++--------------------- 1 file changed, 40 insertions(+), 41 deletions(-) diff --git a/pages/ja/delegating.mdx b/pages/ja/delegating.mdx index eb058d946234..06c1297a5a4a 100644 --- a/pages/ja/delegating.mdx +++ b/pages/ja/delegating.mdx @@ -2,92 +2,91 @@ title: デリゲーティング --- -Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. +デリゲーターは悪意の行動をしてもスラッシュされないが、デリゲーターにはデポジット税が課せられ、ネットワークの整合性を損なう可能性のある悪い意思決定を抑止します。 -## Delegator Guide +## デリゲーターガイド -This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: +このガイドでは、グラフネットワークで効果的なデリゲーターになるための方法を説明します。 デリゲーターは、デリゲートされたステークのすべてのインデクサーとともにプロトコルの収益を共有します。 デリゲーターは、複数の要素を考慮した上で、最善の判断でインデクサーを選ばなければなりません。 このガイドでは、メタマスクの適切な設定方法などについては説明しません。このガイドには3つのセクションがあります。 There are three sections in this guide: -- The risks of delegating tokens in The Graph Network -- How to calculate expected returns as a delegator -- A Video guide showing the steps to delegate in the Graph Network UI +- グラフネットワークでトークンをデリゲートすることのリスク +- デリゲーターとしての期待リターンの計算方法 +- グラフネットワークの UI でデリゲートする手順のビデオガイド -## Delegation Risks +## デリゲーションリスク -Listed below are the main risks of being a delegator in the protocol. +以下に、本プロトコルでデリゲーターとなる場合の主なリスクを挙げます。 -### The delegation fee +### デリゲーション手数料 -It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. +デリゲートするたびに、0.5%の手数料が発生します。 つまり、1000GRT を委任する場合は、自動的に 5GRT が消費されます。 -This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. +つまり、安全のために、デリゲーターはインデクサーにデリゲートした場合のリターンを計算しておく必要があります。 例えば、デリゲーターは、自分のデリゲートに対する 0.5%のデポジット税を取り戻すのに何日かかるかを計算するとよいでしょう。 -### The delegation unbonding period +### デリゲーションのアンボンディング期間 -Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. +デリゲーターが、デリゲーションを解除しようとすると、そのトークンは 28 日間のアンボンディング期間が設けられます。 つまり、28 日間はトークンの譲渡や報酬の獲得ができません。 -One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. +考慮すべき点は、インデクサーを賢く選ぶことです。 信頼できない、あるいは良い仕事をしていないインデクサーを選んだ場合、アンデリゲートしたくなるでしょう。 つまり、報酬を獲得する機会を大幅に失うことになり、GRT をバーンするのと同じくらいの負担となります。
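参考までに、上記のデリゲーション手数料(0.5%)を取り戻すまでの日数を概算する TypeScript のスケッチを示します。年率(APY)は説明のための仮の値であり、実際の利回りやプロトコルの計算方法を表すものではありません。

```typescript
// 説明用の概算スケッチです(実際のプロトコルの報酬計算ではありません)。
// 本文の例: 1,000 GRT をデリゲートすると 0.5% = 5 GRT がバーンされます。
const delegated = 1000;     // デリゲートする GRT
const feeRate = 0.005;      // デリゲーション手数料 0.5%
const assumedApy = 0.10;    // 仮定の年率 10%(実際の利回りは変動します)

const fee = delegated * feeRate;                          // 5 GRT
const dailyReward = ((delegated - fee) * assumedApy) / 365;
const daysToBreakEven = Math.ceil(fee / dailyReward);

console.log(`手数料: ${fee} GRT`);
console.log(`手数料を取り戻すまでの目安: 約 ${daysToBreakEven} 日`);
```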
- ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day - unbonding period._ + デリゲーション UIの0.5%の手数料と、28日間のアンボンディング期間に注目してください。
-### Choosing a trustworthy indexer with a fair reward payout for delegators +### デリゲーターに公平な報酬を支払う信頼できるインデクサーの選択 -This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. +これは理解すべき重要な部分です。 まず、デリゲーションパラメータである 3 つの非常に重要な値について説明します。 -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. +インデキシング報酬カット - インデキシング報酬カットは、インデクサーが自分のために保持する報酬の部分です。 つまり、これが 100%に設定されていると、デリゲーターであるあなたは 0 のインデキシング報酬を得ることになります。 UI に 80%と表示されている場合は、デリゲーターとして 20%を受け取ることになります。 重要な注意点として、ネットワークの初期段階では、インデキシング報酬が報酬の大半を占めます。
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The + トップのインデクサーは、デリゲーターに90%の報酬を与えています。 The middle one is giving delegators 20%. The bottom one is giving delegators ~83%.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. +- クエリーフィーカット - これはインデキシングリワードカットと全く同じ働きをします。 しかし、これは特に、インデクサーが収集したクエリフィーに対するリターンを対象としています。 ネットワークの初期段階では、クエリフィーからのリターンは、インデキシング報酬に比べて非常に小さいことに注意する必要があります。 ネットワーク内のクエリフィーがいつから大きくなり始めるのかを判断するために、ネットワークに注意を払うことをお勧めします。 -As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. +このように、適切なインデクサーを選択するためには、多くのことを考えなければなりません。 だからこそ、The Graph の Discord をリサーチして、社会的評価や技術的評価が高く、デリゲーターに安定して報酬を与えることができるインデクサーが誰なのかを見極めることを強くお勧めします。 多くのインデクサーは Discord で活発に活動しており、あなたの質問に喜んで答えてくれるでしょう。 彼らの多くはテストネットで何ヶ月もインデックスを作成しており、ネットワークの健全性と成功を向上させるために、デリゲーターが良いリターンを得られるように最善を尽くしています。 -### Calculating delegators expected return +### デリゲーターの期待リターンを計算 -A Delegator has to consider a lot of factors when determining the return. These +デリゲーターはリターンを決定する際に、多くの要素を考慮しなければなりません。 以下のとおりです: -- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. +- デリゲーターは、インデクサーが利用可能なデリゲートトークンを使用する能力にも目を向けることができます。 もしインデクサーが利用可能なトークンをすべて割り当てていなければ、彼らは自分自身やデリゲーターのために得られる最大の利益を得られないことになります。 +- 現在のネットワークでは、インデクサーは 1 日から 28 日の間であればいつでも割り当てを終了して報酬を受け取ることができます。 そのため、インデクサーがまだ回収していない報酬をたくさん抱えている可能性があり、その結果、報酬の総額が少なくなっています。 これは初期の段階で考慮しておく必要があります。 -### Considering the query fee cut and indexing fee cut +### クエリフィーのカットとインデックスフィーのカットの検討 -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: +上記のセクションで説明したように、問い合わせ手数料カットとインデクシングフィーのカット設定について透明性が高く、誠実なインデクサーを選ぶべきです。 デリゲーターは、Parameters Cooldown の時間を見て、どれだけの時間的余裕があるかを確認する必要があります。 その後、デリゲーターが得ている報酬の額を計算するのはとても簡単です。 その式は以下のとおりです: -![Delegation Image 3](/img/Delegation-Reward-Formula.png) +![インデキシング リワードカット](/img/Delegation-Reward-Formula.png) -### Considering the indexers delegation pool +### インデクサーのデリゲーションプールを考慮する -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. 
All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool:
+全てのデリゲーション報酬は均等に分配され、デリゲーターがプールに入金した金額によって決まるプールの簡単なリバランスが行われます。 これにより、デリゲーターはプールのシェアを得ることができます。

-![Share formula](/img/Share-Forumla.png)
+![シェアの計算式](/img/Share-Forumla.png)

-Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators.
+この式を使うと、デリゲーターに 20%しか提供していないインデクサーの方が、デリゲーターに 90%を提供しているインデクサーよりも、実際にはより良い報酬をデリゲーターに与えている可能性があることがわかります。

-A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return.
+そのため、デリゲーターは、デリゲーターに20%を提供しているインデクサーの方が、より良いリターンを提供していると判断して計算することができます。

-### Considering the delegation capacity
+### デリゲーション能力を考慮する

-Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.
+もうひとつ考慮しなければならないのが、デリゲーション能力です。 現在、デリゲーションレシオは 16 に設定されています。 これは、インデクサーが 1,000,000GRT をステークしている場合、そのデリゲーション容量はプロトコルで使用できる 16,000,000GRT のデリゲーショントークンであることを意味します。 この量を超えるデリゲートされたトークンは、全てのデリゲーター報酬を薄めてしまいます。

-Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be.
+あるインデクサーが 100,000,000 GRT をデリゲートされていて、その容量が 16,000,000 GRT しかないと想像してみてください。 これは事実上、84,000,000 GRT トークンがトークンの獲得に使われていないことを意味します。 そして、すべてのデリゲーターとインデクサーは、本来得られるはずの報酬よりもずっと少ない報酬しか得られていません。

-Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.
+したがって、デリゲーターは常にインデクサーのデリゲーション能力を考慮し、それを意思決定に織り込む必要があります。

-## Video guide for the network UI
+## ネットワーク UI のビデオガイド

-This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
+このガイドでは、このドキュメントの内容を全面的に振り返り、UI を操作しながらドキュメントのすべての点をどのように考慮すべきかを説明します。
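上記のシェア計算と報酬カットの考え方を、数値を仮定した TypeScript のスケッチで示します(実際のプロトコルの計算を置き換えるものではありません)。デリゲーターへの分配割合が 20% でも、プールが小さければ 90% のインデクサーより多く受け取れる場合があることがわかります。

```typescript
// 説明用の簡略化したモデルです(数値・名称はすべて仮定です)。
// デリゲーター側の報酬 = 期間中の総報酬 × デリゲーターへの割合、
// 各デリゲーターの取り分はプールへの預け入れ額に比例すると仮定します。
interface IndexerExample {
  rewardsForPeriod: number;        // 期間中の総報酬 (GRT)
  delegatorShareOfRewards: number; // デリゲーター側に渡る割合 (0.2 = 20%)
  delegationPool: number;          // 既存のデリゲーションプール合計 (GRT)
}

function myReward(indexer: IndexerExample, myDelegation: number): number {
  const pool = indexer.delegationPool + myDelegation;
  const delegatorRewards = indexer.rewardsForPeriod * indexer.delegatorShareOfRewards;
  return delegatorRewards * (myDelegation / pool);
}

// デリゲーターへの割合は 20% だが、プールが小さいインデクサー
const a: IndexerExample = { rewardsForPeriod: 10000, delegatorShareOfRewards: 0.2, delegationPool: 90000 };
// デリゲーターへの割合は 90% だが、プールが非常に大きいインデクサー
const b: IndexerExample = { rewardsForPeriod: 10000, delegatorShareOfRewards: 0.9, delegationPool: 900000 };

console.log(myReward(a, 10000).toFixed(1)); // "200.0" GRT
console.log(myReward(b, 10000).toFixed(1)); // "98.9" GRT
```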
From c1de4afbaff2f10166439712b5602e828d355a04 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:42 -0500 Subject: [PATCH 226/241] New translations curating.mdx (Japanese) --- pages/ja/curating.mdx | 104 +++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 52 deletions(-) diff --git a/pages/ja/curating.mdx b/pages/ja/curating.mdx index 2b526405fd98..d4e44811fbcf 100644 --- a/pages/ja/curating.mdx +++ b/pages/ja/curating.mdx @@ -2,102 +2,102 @@ title: キューレーティング --- -Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. +キュレーターは、グラフの分散型経済にとって重要な存在です。 キューレーターは、web3 のエコシステムに関する知識を用いて、The Graph Network がインデックスを付けるべきサブグラフを評価し、シグナルを送ります。 キュレーターは Explorer を通じてネットワークのデータを見て、シグナルを出す判断をすることができます。 The Graph Network は、良質なサブグラフにシグナルを送ったキュレーターに、サブグラフが生み出すクエリフィーのシェアを与えます。 キュレーターには、早期にシグナルを送るという経済的なインセンティブが働きます。 キュレーターからのシグナルはインデクサーにとって非常に重要で、インデクサーはシグナルを受けたサブグラフからデータを処理したり、インデックスを作成したりすることができます。 -When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. +シグナリングの際、キュレーターはサブグラフの特定のバージョンでシグナリングするか、auto-migrate を使ってシグナリングするかを決めることができます。 Auto-migrate を使ってシグナリングすると、キュレーターのシェアは常に開発者が公開した最新バージョンにアップグレードされます。 代わりに特定のバージョンでシグナルを送ることにした場合、シェアは常にその特定のバージョンのままとなります。 -Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) +キュレーションはリスクを伴うことを忘れないでください。 そして、信頼できるサブグラフでキュレーションを行うよう、十分に注意してください。 サブグラフの作成はパーミッションレスであり、人々はサブグラフを作成し、好きな名前をつけることができます。 キュレーションのリスクについての詳しいガイダンスは、 [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) をご覧ください。 -## Bonding Curve 101 +## ボンディングカーブ 101 -First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. +順を追ってみていきましょう。 まず、各サブグラフにはボンディングカーブがあり、ユーザーがその曲線(カーブ)にシグナルを加えると、キュレーション・シェアが形成されます。 各サブグラフのボンディングカーブはユニークです。 ボンディングカーブは、サブグラフ上でキュレーション・シェアをミントするための価格が、ミントされるシェアの数に応じて直線的に増加するように設計されています。 -![Price per shares](/img/price-per-share.png) +![シェアあたりの価格](/img/price-per-share.png) -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. 
Here’s an example of what we mean, see the bonding curve below: +その結果、価格は直線的に上昇し、時間の経過とともにシェアの購入価格が高くなることを意味しています。 下のボンディングカーブを見て、その例を示します: -![Bonding curve](/img/bonding-curve.png) +![ボンディングカーブ](/img/bonding-curve.png) -Consider we have two curators that mint shares for a subgraph: +あるサブグラフのシェアを作成する 2 人のキュレーターがいるとします。 -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. +- キュレーター A は、サブグラフに最初にシグナルを送ります。 120,000GRT をボンディングカーブに加えることで、2000 もシェアをミントすることができます。 +- キュレーター B のシグナルは、後のある時点でサブグラフに表示されます。 キュレーター A と同じ量のシェアを受け取るためには、360,000GRT を曲線に加える必要があります。 +- 両方のキュレーターがキュレーションシェアの合計の半分を保有しているので、彼らは同額のキュレーターロイヤルティを受け取ることになります。 +- もし、キュレーターの誰かが 2000 のキュレーションシェアをバーンした場合、360,000GRT を受け取ることになります。 +- 残りのキュレーターは、そのサブグラフのキュレーター・ロイヤリティーをすべて受け取ることになります。 もし彼らが自分のシェアをバーンして GRT を引き出す場合、彼らは 120,000GRT を受け取ることになります。 +- **TLDR:** キュレーションシェアの GRT 評価はボンディングカーブによって決まるため、変動しやすいという傾向があります。 また、大きな損失を被る可能性があります。 早期にシグナリングするということは、1 つのシェアに対してより少ない GRT を投入することを意味します。 ひいては、同じサブグラフの後続のキュレーターよりも、GRT あたりのキュレーター・ロイヤリティーを多く得られることになります。 -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** +一般的にボンディングカーブとは、トークンの供給量と資産価格の関係を定義する数学的な曲線のことです。 サブグラフのキュレーションという具体的なケースでは、サブグラフの各シェアの価格は、投資されたトークンごとに上昇し、販売されたトークンごとに減少します。 -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. +The Graph の場合は、 [Bancor が実装しているボンディングカーブ式](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) を活用しています。 -## How to Signal +## シグナルの出し方 -Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) +ボンディングカーブの仕組みについて基本的なことを説明しましたが、ここではサブグラフにシグナルを送る方法を説明します。 グラフ・エクスプローラーの「キュレーター」タブ内で、キュレーターはネットワーク・スタッツに基づいて特定のサブグラフにシグナルを送ることができるようになります。 エクスプローラーでの操作方法の概要はこちらをご覧ください。 -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. 
+キュレーターは、特定のサブグラフのバージョンでシグナルを出すことも、そのサブグラフの最新のプロダクションビルドに自動的にシグナルを移行させることも可能ですます。 どちらも有効な戦略であり、それぞれに長所と短所があります。 -Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +特定のバージョンでのシグナリングは、1 つのサブグラフを複数の dapps が使用する場合に特に有効です。 ある DAP は、サブグラフを定期的に新機能で更新する必要があるかもしれません。 別のアプリは、古くても、よくテストされたサブグラフのバージョンを使用することを好むかもしれません。 初回キュレーション時には、1%の標準税が発生します。 -Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. +シグナルを最新のプロダクションビルドに自動的に移行させることは、クエリー料金の発生を確実にするために有効です。 キュレーションを行うたびに、1%のキュレーション税が発生します。 また、移行ごとに 0.5%のキュレーション税を支払うことになります。 つまり、サブグラフの開発者が、頻繁に新バージョンを公開することは推奨されません。 自動移行された全てのキュレーションシェアに対して、0.5%のキュレーション税を支払わなければならないからです。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. +> 注:特定のサブグラフにシグナルを送る最初のアドレスは、最初のキュレーターとみなされ、後続のキュレーターよりもはるかに多くのガスを消費する仕事をしなければなりません。 最初のキュレーターは、キュレーションシェアのトークンを初期化し、ボンディングカーブを初期化し、トークンをグラフのプロキシに転送するからです。 -## What does Signaling mean for The Graph Network? +## グラフネットワークにとってのシグナリングとは? -For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. +最終的な消費者がサブグラフをクエリできるようにするためには、まずサブグラフにインデックスを付ける必要があります。 インデックス化(インデクシング)とは、ファイルやデータ、メタデータを調べ、カタログ化し、結果をより早く見つけられるようにするための作業です。 サブグラフのデータを検索可能にするためには、データを整理する必要があります。 -And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. +そのため、インデクサーがどのサブグラフをインデックスすべきかを推測しなければならない場合、どのサブグラフが良質であるかを検証する方法がないため、しっかりとしたクエリフィーを得られる可能性は低くなります。 そこでキュレーションの出番です。 -Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. 
Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! +キュレーターは The Graph ネットワークを効率化する存在であり、シグナリングとは、キュレーターがインデクサーにサブグラフのインデックスの作成に適していることを知らせるためのプロセスです。 シグナリングによりキュレータはサブグラフのキュレーションシェアを獲得し、サブグラフが駆動する将来のクエリフィーの一部を受け取る権利を得るため、インデクサーはキュレータからのシグナルを本質的に信頼することができます。 キュレーターのシグナルは、Graph Curation Shares (GCS) と呼ばれる ERC20 トークンで表されます。 より多くのクエリーフィーを獲得したいキュレーターは、ネットワークへの強いフィーの流れを生み出すと予測されるサブグラフに GRT をシグナルするべきであるといえます。 キュレーターはスラッシュされることはありませんが、ネットワークの整合性を損なう可能性のある不適切な意思決定を阻害するために、キュレーターにはデポジット税が課せられます。 また、キュレーターは、質の低いサブグラフでキュレーションを行うことを選択した場合、処理すべきクエリ数や、それらのクエリを処理するインデクサー数が少なくなるため、少ないクエリ手数料しか得られなくなります。 下の図をご覧ください。 -![Signaling diagram](/img/curator-signaling.png) +![シグナリング ダイアグラム](/img/curator-signaling.png) -Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). +インデクサーは、「グラフ・エクスプローラー」で確認したキュレーション・シグナルに基づいて、インデックスを作成するサブグラフを見つけることができます。 -![Explorer subgraphs](/img/explorer-subgraphs.png) +![エクスプローラー サブグラフ](/img/explorer-subgraphs.png) -## Risks +## リスク -1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. The Graph では、クエリ市場は本質的に歴史が浅く、初期の市場ダイナミクスのために、あなたの%APY が予想より低くなるリスクがあります。 +2. キュレーション料 - キュレーターがサブグラフ上で GRT をシグナルすると、1%のキュレーション税が発生します。 この手数料はバーンされ、残りはボンディングカーブのリザーブサプライに預けられます。 +3. キュレーターが GRT を引き出すためにシェアをバーンすると、残りのシェアの GRT 評価額が下がります。 場合によっては、キュレーターが自分のシェアを一度にバーンすることを決めることがあるので注意が必要です。 このような状況は、dapp 開発者がサブグラフのバージョン管理や改良、クエリをやめた場合や、サブグラフが故障した場合によく見られます。 その結果、残ったキュレーターは当初の GRT の何分の一かしか引き出せないかもしれません。 リスクプロファイルの低いネットワークロールについては、\[Delegators\](https://thegraph.com/docs/delegating)を参照してください。 +4. 
サブグラフはバグで失敗することがあります。 失敗したサブグラフは、クエリフィーが発生しません。 結果的に、開発者がバグを修正して新しいバージョンを展開するまで待たなければならなくなります。 + - サブグラフの最新バージョンに加入している場合、シェアはその新バージョンに自動移行します。 これには0.5%のキュレーション税がかかります。 + - 特定のサブグラフのバージョンでシグナリングしていて、それが失敗した場合は、手動でキュレーションシャイアをバーンする必要があります。 キュレーション・カーブに最初に預けた金額よりも多く、または少なく GRT を受け取る可能性があることに注意してください。 これはキュレーターとしてのリスクです。 そして、新しいサブグラフのバージョンにシグナルを送ることができ、1%のキュレーション税が発生します。 -## Curation FAQs +## キューレーション FAQ -### 1. What % of query fees do Curators earn? +### 1. キュレータはクエリフィーの何%を獲得できますか? -By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. +サブグラフにシグナリングすることで、そのサブグラフが生成する、全てのクエリフィーのシェアを得ることができます。 全てのクエリーフィーの 10%は、キュレーターのキュレーションシェアに比例してキュレーターに支払われます。 この 10%はガバナンスの対象となります。 -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. シグナルを出すのに適した質の高いサブグラフはどのようにして決めるのですか? -Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +高品質のサブグラフを見つけるのは複雑な作業ですが、さまざまな方法でアプローチできます。 キュレーターとしては、クエリボリュームを牽引している信頼できるサブグラフを探したいと考えます。 信頼できるサブグラフは、それが完全で正確であり、Dap のデータニーズをサポートしていれば価値があるかもしれません。 アーキテクチャが不十分なサブグラフは、修正や再公開が必要になるかもしれませんし、失敗に終わることもあります。 キュレーターにとって、サブグラフが価値あるものかどうかを評価するために、サブグラフのアーキテクチャやコードをレビューすることは非常に重要です。 その結果として: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- キュレーターはネットワークの理解を利用して、個々のサブグラフが将来的にどのように高いまたは低いクエリボリュームを生成するかを予測することができます。 +- キュレーターは、グラフ・エクスプローラーで利用可能なメトリクスも理解する必要があります。 過去のクエリボリュームやサブグラフの開発者が誰であるかといったメトリクスは、サブグラフがシグナリングする価値があるかどうかを判断するのに役立ちます。 -### 3. What’s the cost of upgrading a subgraph? +### 3. サブグラフのアップグレードにかかるコストは? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. +キュレーション株式を新しいサブグラフのバージョンに移行すると、1%のキュレーション税が発生します。 キュレーターは、サブグラフの最新バージョンへの登録を選択することができます。 キュレーターのシェアが新しいバージョンに自動移行されると、キュレーターはキュレーション税の半分、つまり0.5%を支払うことになります。これは、サブグラフのアップグレードがガスを消費するオンチェーンアクションであるためです。 -### 4. How often can I upgrade my subgraph? +### 4. どのくらいの頻度でサブグラフをアップグレードできますか? -It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. +サブグラフのアップグレードは、あまり頻繁に行わないことをお勧めします。 詳しくは上記の質問を参照してください。 -### 5. Can I sell my curation shares? +### 5. キュレーションのシェアを売却することはできますか? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. 
The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. +キュレーションシェアは、他の ERC20 トークンのように「買う」ことも「売る」こともできません。 キュレーションシェアは、特定のサブグラフのボンディングカーブに沿って、ミント(作成)またはバーン(破棄)することしかできません。 新しいシグナルをミントするのに必要な GRT の量と、既存のシグナルをバーンしたときに受け取る GRT の量は、そのボンディングカーブによって決まります。 キュレーターとしては、GRT を引き出すためにキュレーションシェアをバーンすると、最初に預けた GRT よりも多くの GRT を手にすることもあれば、少なくなることもあることを把握しておく必要があります。 -Still confused? Check out our Curation video guide below: +まだ不明点がありますか? その他の不明点に関しては、 以下のキュレーションビデオガイドをご覧ください:
From 8d0e83707a3e929a067d6955d6352a3020d684c6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:43 -0500 Subject: [PATCH 227/241] New translations curating.mdx (Korean) --- pages/ko/curating.mdx | 104 +++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 52 deletions(-) diff --git a/pages/ko/curating.mdx b/pages/ko/curating.mdx index 203e77b352cf..456deec666f7 100644 --- a/pages/ko/curating.mdx +++ b/pages/ko/curating.mdx @@ -2,102 +2,102 @@ title: 큐레이팅 --- -Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. +큐레이터들은 더 그래프의 탈중앙화 경제에 매우 중요한 역할을 합니다. 이들은 웹3 생태계에 대한 지식을 활용하여 그래프 네트워크에 의해 색인화되어야 하는 서브그래프에 대한 평가와 신호를 수행합니다. 탐색기를 통해 큐레이터는 네트워크 데이터를 보고 신호 전달 결정을 내릴 수 있습니다. 더그래프 네트워크는 양질의 서브그래프에 신호를 보내는 큐레이터에게 서브그래프가 생성하는 쿼리 수수료에 대한 몫을 보상합니다. 큐레이터들은 이른 신호를 보내도록 경제적으로 장려된다. 큐레이터의 이러한 신호들은 신호되어진 서브그래프들로부터 데이터를 처리하거나 인덱싱 할 수 있는 인덱서들에게 중요합니다. -When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. +신호를 보낼 때 큐레이터는 서브그래프의 특정 버전에 신호를 보내거나 자동 마이그레이션을 사용하여 신호를 보내기로 결정할 수 있습니다. 자동 마이그레이션을 사용하여 신호를 보낼 때 큐레이터의 공유는 항상 개발자가 게시한 최신 버전으로 업그레이드됩니다. 만약, 여러분이 이를 대신하여 특정 버전에서 신호를 보내기로 결정하면 공유는 항상 이 특정 버전으로 유지됩니다. -Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) +큐레이션은 위험하다는 것을 기억하시길 바랍니다. 여러분들이 확실히 신뢰할 수 있는 서브그래프에 대한 큐레이션이 진행되도록 노력일 기울이시길 바랍니다. 서브그래프의 제작은 비허가형이기 때문에, 사람들은 서브그래프를 만들고 그들이 원하는 어떠한 이름으로도 명명할 수 있습니다. 큐레이션 위험에 대한 더 많은 가이드를 얻기 위해 [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)를 확인하시길 바랍니다. -## Bonding Curve 101 +## 본딩 커브 101 -First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. +먼저 우리가 한발짝 물러나 보도록 하겠습니다. 각 서브그래프에는 유저가 시그날을 해당 커브**에** 추가할 때 큐레이션 쉐어가 발행되는 본딩 커브가 존재합니다. 각 서브그래프의 본딩 커브는 특별합니다. 본딩커브는 서브그래프에서 큐레이션 쉐어를 발행하는 가격이 발행된 쉐어 수에 걸쳐 선형적으로 증가하도록 설계되었습니다. -![Price per shares](/img/price-per-share.png) +![シェアあたりの価格](/img/price-per-share.png) -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: +결과적으로 가격이 선형적으로 상승하므로 시간이 지남에 따라 쉐어를 구입하는 데 더 많은 비용이 소요됩니다. 여기 저희가 무엇을 의미하는지에 대한 예시가 있습니다. 아래의 본딩 커브를 보시죠. 
-![Bonding curve](/img/bonding-curve.png) +![ボンディングカーブ](/img/bonding-curve.png) -Consider we have two curators that mint shares for a subgraph: +서브그래프에 대한 쉐어를 발행하는 큐레이터가 두 명 있다고 가정해 봅시다. -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph at some point in time later. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. +- 큐레이터 A는 서브그래프에 신호를 보낸 첫 번째 사람입니다. 120,000 GRT를 커브에 추가함으로써, 그들은 2000개의 쉐어를 발행할 수 있습니다. +- 어느 시점 이후에 큐레이터 B의 신호가 서브그래프에 전달됩니다. 큐레이터 A와 동일한 양의 쉐어를 받기 위해서는 360,000 GRT를 커브에 추가해야 합니다. +- 두 큐레이터가 큐레이터 총 쉐어의 절반씩을 보유하고 있기 때문에 큐레이터 로열티는 똑같이 분배됩니다. +- 만약 큐레이터 중 누구든지 2000 큐레이션 쉐어를 소각할 경우 그들은 360,000 GRT를 받게 됩니다. +- 나머지 큐레이터는 이제 해당 서브그래프에 대한 모든 큐레이터 로열티를 받게 됩니다. 만약 그들이 GRT를 출금하기 위해 쉐어를 소각하는 경우 120,000 GRT를 받게 됩니다. +- **TLDR:** 해당 큐레이션 쉐어의 GRT 가치는 본딩 커브에 의해 결정되며 변동성이 있을 수 있습니다. 큰 손실을 입을 수 있는 가능성이 존재합니다. 이른 신호를 보낸다는 것은 여러분들이 각 쉐어를 위해 더 적은 GRT를 넣는다는 것을 의미합니다. 나아가서, 이는 동일한 서브그래프에 대해 이후 참여하는 큐레이터보다 GRT당 큐레이터 로열티를 더 많이 받는다는 의미이기도 합니다. -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** +일반적으로, 본딩 커브는 토큰 공급과 자산 가격 사이의 관계를 정의하는 수학적 곡선입니다. 서브그래프 큐레이션의 특별한 경우에, **각 서브그래프 쉐어의 가격은 각 토큰이 투자될 때마다 증가합니다.** 그리고 **각 토큰 쉐어의 가격은 각 토큰이 판매될 때 마다 감소합니다.** -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. +더그래프의 경우에는, [Bancor의 본딩 커브 공식 구현](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA)이 활용됩니다. -## How to Signal +## 신호를 보내는 방법 -Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) +이제 저희는 본딩 커브의 작동 방식에 대한 기본 내용을 알아보았는데요, 서브 그래프에서 신호를 보내는 방법은 다음과 같습니다. 더그래프 탐색기의 큐레이터 탭 내에서 큐레이터는 네트워크 통계를 기반으로 특정 서브그래프에 신호전달 혹은 신호해제를 할 수 있습니다. 탐색기에서 이 작업을 수행하는 방법에 대한 단계별 개요를 알아보기 위해, [이곳](https://thegraph.com/docs/explorer)을 클릭하시길 바랍니다. -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +또한 그들은 그 서브그래프의 최신 생산 빌드에 신호를 자동으로 이전하도록 선택할 수도 있습니다. 
둘 다 유효한 전략이며 나름대로 장단점이 존재합니다. -Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +특정 버전의 신호는 하나의 서브그래프가 여러 개의 dapp에 의해 사용될 때 특히 유용합니다. 하나의 dapp은 새로운 기능들과 함께 서브그래프를 정기적으로 업데이트해야 할 수도 있습니다. 다른 dapp에서는 테스트를 잘 거친 이전 서브그래프 버전을 사용하는 것을 선호할 수 있습니다. 최초 큐레이션 시, 1%의 표준 세금이 부과됩니다. -Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. +여러분들의 신호가 최신 프로덕션 빌드로 자동 이전 되도록 하는 것은 여러분들이 쿼리 수수료를 계속 발생시키는 데 유용할 수 있습니다. 여러분들이 매번 큐레이션을 할 때마다, 1퍼센트의 큐레이션 세금이 부과됩니다. 또한 여러분들은 매번의 마이그레이션 마다 0.5%의 큐레이션 세금을 지불해야합니다. 서브그래프 개발자는 새로운 버전을 자주 발행하는 것을 꺼려합니다. - 그들은 자동으로 마이그레이션된 모든 큐레이션 쉐어에 대해 0.5%의 큐레이션 세금을 내야 합니다. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. +> **참고**: 특정 서브그래프를 신호하는 첫 번째 주소는 첫 번째 큐레이터로 간주되며 첫 번째 큐레이터는 큐레이션 쉐어 토큰을 초기화하고, 본딩 커브를 초기화하며, 또한 토큰을 그래프 프록시로 전송하기 때문에 이어서 참여하는 다른 큐레이터들 보다 훨씬 더 많은 가스 집약적인 작업을 수행해야 합니다. -## What does Signaling mean for The Graph Network? +## 더그래프 네트워크에서 신호를 보내는 것은 무엇을 의미할까요? -For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. +최종 소비자가 서브그래프를 쿼리할 수 있으려면 먼저 서브그래프를 인덱싱해야 합니다. 인덱싱은 파일, 데이터 및 메타데이터를 보고, 카탈로그를 작성한 다음 인덱싱하여 원하는 결과를 더 빨리 찾을 수 있도록 하는 프로세스입니다. 서브그래프의 데이터가 검색 가능하게 하기 위해서, 데이터 구성이 필요합니다. -And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. +따라서, 만약 인덱서들이 어떤 서브그래프를 인덱싱해야 하는지 추측해야만 한다면, 어떤 서브그래프가 좋은지 검증할 방법이 없기 때문에 강력한 쿼리 비용을 얻을 가능성은 낮습니다. 큐레이션을 시작합니다. -Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. 
Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! +큐레이터는 그래프 네트워크를 효율적으로 만들고, 시그널링은 큐레이터가 인덱서에 어떤 서브그래프가 인덱싱 하기에 좋다는 것을 알리기 위해 사용하는 프로세스입니다. 여기서 서브그래프를 위해 본딩 커브에 GRT가 추가됩니다. 인덱서들은 큐레이터의 신호를 본질적으로 신뢰할 수 있습니다. 그 이유는 신호를 보냄에 있어, 큐레이터가 발행하는 서브그래프의 큐레이션 쉐어는 해당 서브그래프가 향후 제공하게 될 쿼리 수수료에 대한 비율로서 적용되기 때문입니다. 큐레이션 신호는 GCS(Graph Curation Shares)라고 불리우는 ERC20 토큰으로 표현됩니다. 더 많은 쿼리 수수료를 얻고자 하는 큐레이터는 네트워크에 대한 수수료 흐름을 크게 발생시킬 것으로 예측되는 서브그래프에 GRT 신호를 보내야 합니다.큐레이터는 나쁜 행위로 인해 슬래싱 패널티를 받지는 않지만, 네트워크의 무결성을 해칠 수 있는 형편없는 의사결정에 대한 의욕을 꺾기 위해 큐레이터에게 부과되는 예치세가 존재합니다. 큐레이터는 만약에 그들이 낮은 품질의 서브그래프를 큐레이팅 하기로 선택할 경우, 처리 할 쿼리가 적거나, 이러한 쿼리를 처리할 인덱서들이 적기 때문에 더 낮은 쿼리 수수료를 취득하게 될 것입니다. 아래의 다이아그램을 보시죠! -![Signaling diagram](/img/curator-signaling.png) +![シグナリング ダイアグラム](/img/curator-signaling.png) -Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). +인덱서는 더그래프 탐색기(아래의 스크린샷 참조)에 표시되는 큐레이션 신호를 기반으로 인덱싱할 서브그래프를 찾을 수 있습니다. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![エクスプローラー サブグラフ](/img/explorer-subgraphs.png) -## Risks +## 위험요소 -1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. The Graph의 쿼리 시장은 본질적으로 젊고, 초기 시장의 변동성으로 인해 APY %가 예상보다 낮을 수 있습니다. +2. 큐레이션 수수료 - 큐레이터가 서브그래프상에 GRT 신호를 보낼 때, 그들은 1%의 큐래이션 세를 내야합니다. 이 수수료는 소각되며, 나머지는 본딩 커브의 예비 공급량에 예치됩니다. +3. 큐레이터들이 GRT를 출금하기 위해 그들의 쉐어를 소각할 경우, 잔존하는 쉐어들의 GRT 가치는 줄어들 것입니다. 어떤 경우에는 큐레이터들이 **한꺼번에** 쉐어를 소각하기로 결정할 수도 있다는 것을 주의하시길 바랍니다. 이러한 상황은 만약 dapp 개발자가 서브그래프의 버전/개선 및 쿼리를 중지하거나 어떠한 서브그래프가 실패할 경우 일반적으로 발생할 수 있습니다. 결과적으로, 잔존 큐레이터들은 아마 오직 그들의 초기 GRT의 일부만을 출금 가능할 수도 있습니다. 위험 프로필이 낮은 네트워크 역할을 위해, \[위임자\] (https://thegraph.com/docs/delegating)를 읽어보시기 바랍니다. +4. 어떤 서브그래프는 버그로 인해 실패할 수도 있습니다. 실패한 서브그래프에는 쿼리 수수료가 부과되지 않습니다. 따라서 개발자가 버그를 수정하고 새 버전을 배포할 때까지 기다려야 합니다. + - 만약 여러분들이 최신 버전의 서브그래프에 가입하신 경우에, 여러분들의 쉐어는 해당 신규 버전으로 자동 마이그레이션될 것입니다. 이는 0.5%의 큐레이션 세금이 부과될 것입니다. + - 만약 여러분이 특정 서브그래프 버전에 신호를 보냈지만 그것이 실패한다면, 여러분은 여러분의 큐레이션 쉐어를 수동으로 소각해야 할 것입니다. 
큐레이션 커브에 처음 여러분들이 보관한 GRT보다 더 많거나 적은 GRT를 수령하실 수 있다는 것을 인지하시길 바랍니다. 이는 큐레이터 역할과 관련된 위험요소입니다. 그 후 새로운 서브그래프 버전에 신호를 보낼 수 있으며, 이때 1%의 큐레이션 세금이 발생합니다. -## Curation FAQs +## 큐레이션 FAQ -### 1. What % of query fees do Curators earn? +### 1. 큐레이터들은 쿼리 수수료의 몇 %를 얻나요? -By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. +서브그래프에 신호를 보냄으로써, 여러분들은 이 서브그래프가 생성하는 모든 쿼리 수수료의 쉐어를 얻게 됩니다. 모든 쿼리 수수료의 10%는 각자의 큐레이터 쉐어에 비례하여 각 큐레이터들에게 분배됩니다. 이 10%는 거버넌스 대상입니다. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. 어떤 서브그래프들이 신호를 보낼 고품질의 서브그래프인지 어떻게 결정하나요? -Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +고품질 서브그래프를 찾는 것은 복잡한 작업이지만 다양한 방식의 접근이 가능합니다. 큐레이터로서, 여러분들은 쿼리 볼륨을 높이는 신뢰할 수 있는 서브그래프를 찾길 원하실 것입니다. 신뢰할 수 있는 서브그래프는 완전하고, 정확하며, dapp의 데이터 요구 사항들을 적절히 지원하는 경우 가치가 있을 것입니다. 잘못 구성된 서브그래프는 수정 혹은 다시 게시되어야 하지만, 결국에 실패할 수도 있습니다. 큐레이터는 어떠한 서브그래프가 가치가 있는지 평가하기 위해, 서브그래프의 아키텍처 또는 코드를 검토하는 것이 중요합니다. 결론적으로; -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- 큐레이터는 네트워크에 대한 이해를 바탕으로 개별 서브그래프가 미래에 어떻게 더 높거나 더 낮은 쿼리 볼륨을 생성할 수 있는지 시도 및 예측을 해볼 수 있습니다. +- 큐레이터는 그래프 탐색기를 통해 사용할 수 있는 메트릭스 또한 이해해야 합니다. 과거 쿼리 볼륨 및 서브그래프 개발자 정보와 같은 메트릭스는 서브그래프가 신호를 보낼 가치가 있는지 여부를 결정하는 데 도움이 될 수 있습니다. -### 3. What’s the cost of upgrading a subgraph? +### 3. 서브그래프의 업그레이드 비용은 얼마인가요? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. +여러분들의 큐레이션 쉐어를 새 서브그래프 버전으로 마이그레이션하시면, 1%의 큐레이션 세금이 발생합니다. 큐레이터는 서브그래프의 최신 버전을 구독하도록 선택할 수 있습니다. 큐레이터 쉐어가 새 버전으로 자동 마이그레이션 되면 큐레이터들은 큐레이션 세금의 절반 또한 지불합니다. 즉, 0.5%를 지불하게 되는데, 이는 서브그래프를 업그레이드하는 일은 가스를 소모하는 온체인 작업이기 때문입니다. -### 4. How often can I upgrade my subgraph? +### 4. 저는 얼마나 자주 저의 서브그래프를 업그레이드 할 수 있나요? -It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. +서브그래프를 너무 자주 업그레이드하지 않으시길 권장합니다. 자세한 내용은 위의 질문을 참조하시길 바랍니다. -### 5. Can I sell my curation shares? +### 5. 저는 저의 큐레이션 쉐어들을 판매할 수 있나요? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve.
As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. +큐레이션 쉐어들은 아마 여러분들이 익숙하실 다른 ERC20 토큰들 처럼 "구매" 또는 "판매" 될 수 없습니다. 이는 오직 특정 서브그래프를 위한 본딩 커브에서 생성되고 소각될 수 있습니다. 새로운 신호를 만드는 데 필요한 GRT의 양과 기존 신호를 소각할 때 받는 GRT의 양은 해당 본딩 커브에 의해 결정됩니다. 큐레이터로서, 여러분들은 GRT를 인출하기 위해 큐레이션 쉐어를 소각할 때 처음에 예치한 것보다 많거나 적은 GRT를 수령할 수 있음을 인지하셔야 합니다. -Still confused? Check out our Curation video guide below: +아직도 혼란스러우신가요? 아래의 큐레이션 비디오 가이드를 확인해보시길 바랍니다.
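FAQ 1 in the patch above states that 10% of a subgraph's query fees are paid to its Curators pro rata to their curation shares. A small sketch of that split follows; the fee volume and share balances are made-up inputs, and the 10% figure is the governance-controlled value quoted in the text.

```python
# Pro-rata split of the Curators' 10% slice of a subgraph's query fees.
# Fee volume and share balances below are made-up example inputs.

CURATOR_FEE_PORTION = 0.10  # quoted in the FAQ above; subject to governance


def curator_query_fees(total_query_fees_grt: float,
                       my_shares: float,
                       total_shares: float) -> float:
    """GRT one curator earns from a subgraph's query fees over some period."""
    curator_pool = total_query_fees_grt * CURATOR_FEE_PORTION
    return curator_pool * (my_shares / total_shares)


# Example: 50,000 GRT of query fees; the curator holds 2,000 of 4,000 shares.
print(curator_query_fees(50_000, 2_000, 4_000))  # 2,500 GRT
```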
From ed46692e6c3bf3fddf712720164b8809dc074d87 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:44 -0500 Subject: [PATCH 228/241] New translations curating.mdx (Chinese Simplified) --- pages/zh/curating.mdx | 96 +++++++++++++++++++++---------------------- 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/pages/zh/curating.mdx b/pages/zh/curating.mdx index 8faa88482bf7..66ed9fe2bd2a 100644 --- a/pages/zh/curating.mdx +++ b/pages/zh/curating.mdx @@ -2,96 +2,96 @@ title: 策展 --- -Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signalling decisions. The Graph Network rewards curators that signal on good quality subgraphs earn a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signalled subgraphs. +策展人对于 The Graph 去中心化的经济至关重要。 他们利用自己对 web3 生态系统的了解,对应该被 The Graph 网络索引的子图进行评估并发出信号。 通过资源管理器,策展人能够查看网络数据以做出信号决定。 The Graph 网络对那些在优质子图上发出信号的策展人给予奖励,并从子图产生的查询费中分得一部分。 在经济上,策展人被激励着尽早发出信号。 这些来自策展人的线索对索引人来说非常重要,他们可以对这些发出信号的子图进行处理或索引。 -When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a curator’s shares will always be upgraded to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version. +在发出信号时,策展人可以决定在子图的一个特定版本上发出信号,或者使用自动迁移发出信号。 当使用自动迁移发出信号时,策展人的份额将始终升级到由开发商发布的最新版本。 如果你决定在一个特定的版本上发出信号,股份将始终保持在这个特定的版本上。 -Remember that curation is risky. Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) +Remember that curation is risky. 请做好你的工作,确保你在你信任的子图上进行策展。 请做好你的工作,确保你在你信任的子图上进行策展。 创建子图是没有权限的,所以人们可以创建子图,并称其为任何他们想要的名字。 关于策展风险的更多指导,请查看 [The Graph Academy 的策展指南。 ](https://thegraph.academy/curators/) -## Bonding Curve 101 +## 联合曲线 101 -First we take a step back. Each subgraph has a bonding curve on which curation shares are minted, when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. +首先,我们退一步讲。 每个子图都有一条粘合曲线,当用户在曲线上 **添加**信号时,策展份额就在这条曲线上被铸造出来。 每个子图的粘合曲线都是独一无二的。 粘合曲线的结构是这样的:在一个子图上铸造一个策展份额的价格随着铸造的份额数量而线性增加。 ![Price per shares](/img/price-per-share.png) -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: +因此,价格是线性增长的,这意味着随着时间的推移,购买股票的成本会越来越高。 这里有一个例子说明我们的意思,请看下面的粘合曲线。 -![Bonding curve](/img/bonding-curve.png) +![联合曲线](/img/bonding-curve.png) -Consider we have two curators that mint shares for a subgraph: +考虑到我们有两个策展人,他们为一个子图铸造了股份: -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph at some point in time later. 
To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- If any of the curators were now to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signalling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. +- 策展人 A 是第一个对子图发出信号的人。 通过在曲线中加入 120,000 GRT,他们能够铸造出 2000 股。 +- 策展人 B 在之后的某个时间点在子图上发出信号。 为了获得与策展人 A 相同数量的股票,他们必须在曲线中加入 360,000 GRT。 +- 由于两位策展人都持有策展人股份总数的一半,他们将获得同等数量的策展人使用费。 +- 如果任何一个策展人现在烧掉他们的 2000 个策展份额,他们将获得 360,000 GRT。 +- 剩下的策展人现在将收到该子图的所有策展人使用费。 如果他们烧掉他们的股份来提取 GRT,他们将得到 12 万 GRT。 +- **TLDR:** 策展人股份的 GRT 估值是由粘合曲线决定的,可能会有波动。 有可能出现大的收益,也有可能出现大的损失。 提前发出信号意味着你为每只股票投入的 GRT 较少。 推而广之,这意味着在相同的子图上,你比后来的策展人在每个 GRT 上赚取更多的策展人使用费。 -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.** +一般来说,粘合曲线是一条数学曲线,定义了代币供应和资产价格之间的关系。 在子图策展的具体情况下,\*\*资产(子图份额)的价格随着每一个代币的投入而增加,资产的价格随着每一个代币的出售而减少。 -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. +在 The Graph 的案例中, [Bancor 对粘合曲线公式的实施](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) 被利用。 -## How to Signal +## 如何进行信号处理 -Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step by step overview of how to do this in the Explorer, [click here.](/explorer) +现在我们已经介绍了关于粘合曲线如何工作的基本知识,这就是你将如何在子图上发出信号。 在 The Graph 资源管理器的策展人选项卡中,策展人将能够根据网络统计数据对某些子图发出信号和取消信号。 关于如何在资源管理器中做到这一点的一步步概述,请[点击这里。 ](https://thegraph.com/docs/explorer) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +策展人可以选择在特定的子图版本上发出信号,或者他们可以选择让他们的策展份额自动迁移到该子图的最新生产版本。 这两种策略都是有效的,都有各自的优点和缺点。 -Signalling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might have the need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +当一个子图被多个 dApp 使用时,在特定版本上发出信号特别有用。 一个 dApp 可能需要定期更新子图的新功能。 另一个 dApp 可能更喜欢使用旧的、经过良好测试的子图版本。 在初始策展时,会产生 1%的标准税。 -Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. 
Subgraph developers are discouraged from frequently publishing new versions - they have to pay 0.5% curation tax on all auto-migrated curation shares. +让你的策展份额自动迁移到最新的生产构建,对确保你不断累积查询费用是有价值的。 每次你策展时,都会产生 1%的策展税。 每次迁移时,你也将支付 0.5%的策展税。 不鼓励子图开发人员频繁发布新版本--他们必须为所有自动迁移的策展份额支付 0.5%的策展税。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy. +> **注意**: 第一个给特定子图发出信号的地址被认为是第一个策展人,将不得不消耗比之后其他策展人更多的燃料工作,因为第一个策展人初始化了策展份额代币,初始化了粘合曲线,还将代币转移到 Graph 代理。 -## What does Signaling mean for The Graph Network? +## 信号对 The Graph 网络意味着什么? -For end consumers to be able to query a subgraph, the subgraph must first be indexed. Indexing is a process where files, data, and metadata are looked at, cataloged, and then indexed so that results can be found faster. In order for a subgraph’s data to be searchable, it needs to be organized. +为了让终端消费者能够查询一个子图,该子图必须首先被索引。 索引是一个过程,对文件、数据和元数据进行查看、编目,然后编制索引,这样可以更快地找到结果。 为了使子图的数据可以被搜索到,它需要被组织起来。 -And so, if Indexers had to guess which subgraphs they should index, there would be a low chance that they would earn robust query fees because they’d have no way of validating which subgraphs are good quality. Enter curation. +因此,如果索引人不得不猜测他们应该索引哪些子图,那么他们赚取强大的查询费用的机会就会很低,因为他们没有办法验证哪些子图是高质量的。 进入策展阶段。 -Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low quality subgraph, since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below! +策展人使 The Graph 网络变得高效,信号是策展人用来让索引人知道一个子图是好的索引的过程,其中 GRT 被存入子图的粘合曲线。 索引人可以从本质上信任策展人的信号,因为一旦发出信号,策展人就会为该子图铸造一个策展份额,使他们有权获得该子图所带来的部分未来查询费用。 策展人的信号以ERC20代币的形式表示,称为Graph Curation Shares(GCS)。 想赚取更多查询费的策展人应该向他们预测会给网络带来大量费用的子图发出他们的 GRT 信号。 策展人不能因为不良行为而被砍掉,但有一个对策展人的存款税,以抑制可能损害网络完整性的不良决策。 如果策展人选择在一个低质量的子图上进行策展,他们也会赚取较少的查询费,因为有较少的查询需要处理,或者有较少的索引人处理这些查询。 请看下面的图! ![Signaling diagram](/img/curator-signaling.png) -Indexers can find subgraphs to index based on curation signals they see in The Graph Explorer (screenshot below). +索引人可以根据他们在 The Graph 浏览器中看到的策展信号找到要索引的子图(下面的截图)。 ![Explorer subgraphs](/img/explorer-subgraphs.png) -## Risks +## 风险 -1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. -3. 
When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/delegating). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signalled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. 在 The Graph,查询市场本来就很年轻,由于市场动态刚刚开始,你的年收益率可能低于你的预期,这是有风险的。 +2. 策展费 - 当策展人对子图发出 GRT 信号时,他们会产生 1%的策展税。 这笔费用被烧掉,剩下的被存入绑定曲线的储备供应中。 +3. 当策展人烧掉他们的股份以提取 GRT 时,剩余股份的 GRT 估值将被降低。 请注意,在某些情况下,策展人可能决定 **一次性**烧掉他们的股份。 这种情况可能很常见,如果一个 dApp 开发者停止版本/改进和查询他们的子图,或者如果一个子图失败。 因此,剩下的策展人可能只能提取他们最初 GRT 的一小部分。 关于风险较低的网络角色,请看委托人 \[Delegators\](https://thegraph.com/docs/delegating). +4. 一个子图可能由于错误而失败。 一个失败的子图不会累积查询费用。 因此,你必须等待,直到开发人员修复错误并部署一个新的版本。 + - 如果你订阅了一个子图的最新版本,你的股份将自动迁移到该新版本。 这将产生 0.5%的策展税。 + - 如果你已经在一个特定的子图版本上发出信号,但它失败了,你将不得不手动烧毁你的策展税。 请注意,你可能会收到比你最初存入策展曲线更多或更少的 GRT,这是作为策展人的相关风险。 然后你可以在新的子图版本上发出信号,从而产生1%的策展税。 -## Curation FAQs +## 策展常见问题 -### 1. What % of query fees do Curators earn? +### 1. 策展人能赚取多少百分比的查询费? -By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro rata to their curation shares. This 10% is subject to governance. +通过在一个子图上发出信号,你将获得这个子图产生的所有查询费用的份额。 所有查询费用的 10%将按策展人的策展份额比例分配给他们。 这 10%是受管理的。 -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. 我如何决定哪些子图是高质量的信号? -Finding high quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +寻找高质量的子图是一项复杂的任务,但它可以通过许多不同的方式来实现。 作为策展人,你要寻找那些推动查询量的值得信赖的子图。 这些值得信赖的子图是有价值的,因为它们是完整的,准确的,并支持 dApp 的数据需求。 一个架构不良的子图可能需要修改或重新发布,也可能最终失败。 策展人审查子图的架构或代码,以评估一个子图是否有价值,这是至关重要的。 因此: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through the Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- 策展人可以利用他们的市场知识,尝试预测单个子图在未来可能产生更多或更少的查询量 +- 策展人还应该了解通过 The Graph 浏览器提供的指标。 像过去的查询量和子图开发者是谁这样的指标可以帮助确定一个子图是否值得发出信号。 -### 3. What’s the cost of upgrading a subgraph? +### 3. 升级一个子图的成本是多少? 
-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action which costs gas. +将你的策展份额迁移到一个新的子图版本会产生 1%的策展税。 策展人可以选择订阅子图的最新版本。 当策展人份额被自动迁移到一个新的版本时,策展人也将支付一半的策展税,即 0.5%,因为升级子图是一个链上动作,需要花费交易费。 -### 4. How often can I upgrade my subgraph? +### 4. 我多长时间可以升级我的子图? -It’s suggested that you don’t upgrade your subgraphs too frequently. See the question above for more details. +建议你不要太频繁地升级你的子图。 更多细节请见上面的问题。 -### 5. Can I sell my curation shares? +### 5. 我可以出售我的策展股份吗? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint new signal, and the amount of GRT you receive when you burn your existing signal, is determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. +策展份额不能像你可能熟悉的其他 ERC20 代币那样被"购买"或"出售"。 它们只能沿着特定子图的粘合曲线被铸造(创建)或烧毁(销毁)。 铸造新信号所需的 GRT 数量,以及当你烧毁现有信号时收到的 GRT 数量,是由该粘合曲线决定的。 作为一个策展人,你需要知道,当你燃烧你的策展份额来提取 GRT 时,你最终可能会得到比你最初存入的更多或更少的 GRT。 -Still confused? Check out our Curation video guide below: +还有困惑吗? 点击下面查看策展视频指导:
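The curating page in the patch above quotes two separate charges: a 1% curation tax that is burned on every new signal (with the remainder deposited into the bonding curve's reserve) and a 0.5% tax on each auto-migration to a new subgraph version. A sketch of that arithmetic follows, with the deposit and position values as made-up inputs; this is not the protocol's contract logic.

```python
# Curation tax arithmetic as described above: 1% burned on a new signal,
# 0.5% charged when shares auto-migrate to a new subgraph version.
# Amounts are made-up example inputs.

CURATION_TAX = 0.01
AUTO_MIGRATION_TAX = 0.005


def signal(grt: float) -> tuple[float, float]:
    """Return (GRT burned, GRT deposited into the bonding curve reserve)."""
    burned = grt * CURATION_TAX
    return burned, grt - burned


def after_auto_migration(position_value_grt: float) -> float:
    """Value left after the 0.5% tax on an auto-migrated curation position."""
    return position_value_grt * (1 - AUTO_MIGRATION_TAX)


burned, deposited = signal(10_000)   # 100 GRT burned, 9,900 GRT deposited
print(burned, deposited)
print(after_auto_migration(9_900))   # 9,850.5 GRT after one auto-migration
```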
From 504d991d35d9a3027b77b5ff49688379637967fa Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:46 -0500 Subject: [PATCH 230/241] New translations delegating.mdx (Spanish) --- pages/es/delegating.mdx | 84 ++++++++++++++++++++--------------------- 1 file changed, 42 insertions(+), 42 deletions(-) diff --git a/pages/es/delegating.mdx b/pages/es/delegating.mdx index 3c71bb2d7b41..30e6905758a4 100644 --- a/pages/es/delegating.mdx +++ b/pages/es/delegating.mdx @@ -2,92 +2,92 @@ title: delegación --- -Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. +Los delegadores no pueden ser penalizados por mal comportamiento, pero existe una tarifa inicial de depósitos que desalienta a los delegadores a tomar malas decisiones que puedan comprometer la integridad de la red. -## Delegator Guide +## Guía del delegador -This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: +Esta guía explicará cómo ser un delegador efectivo en Graph Network. Los delegadores comparten las ganancias del protocolo junto con todos los indexadores en base a participación delegada. Un delegador deberá usar su propio discernimiento para elegir los mejores indexadores, en base a una serie de factores. Tenga en cuenta que esta guía no expondrá los pasos necesarios para la configuración adecuada de Metamask, ya que esa información está expuesta en internet. Hay tres secciones en está guía: -- The risks of delegating tokens in The Graph Network -- How to calculate expected returns as a delegator -- A Video guide showing the steps to delegate in the Graph Network UI +- Los riesgos de delegar tokens en la red de The Graph +- Cómo calcular los rendimientos que te esperan siendo delegador +- Una guía visual (en vídeo) que muestra los pasos para delegar a través de la interfaz de usuario ofrecida por The Graph -## Delegation Risks +## Riesgos al delegar -Listed below are the main risks of being a delegator in the protocol. +A continuación se enumeran los principales riesgos de ser un delegador en el protocolo. -### The delegation fee +### La tarifa de delegación -It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. +Es importante comprender que cada vez que delegues, se te cobrará un 0,5%. Esto significa que si delegas 1000 GRT, automáticamente quemarás 5 GRT. -This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. +Esto significa que para estar seguro, un delegador debe calcular cuál será su retorno tras delegar a un Indexer. Por ejemplo, un delegador puede calcular cuántos días le tomará recuperar la tarifa inicial de depósito correspondiente al 0,5% de su delegación. 
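The delegation-fee section above (0.5% of every delegation is burned) invites a quick break-even estimate: how many days of rewards it takes to win back the burned GRT. The sketch below does that arithmetic; the 10% annual reward rate is purely an assumption, since actual returns depend on the chosen Indexer and the network.

```python
# Break-even estimate for the 0.5% delegation tax described above.
# The assumed annual reward rate is illustrative only.

DELEGATION_TAX = 0.005


def days_to_recover_tax(delegated_grt: float, assumed_apy: float) -> float:
    """Days of rewards needed to earn back the GRT burned by the delegation tax."""
    burned = delegated_grt * DELEGATION_TAX   # e.g. 1,000 GRT -> 5 GRT burned
    working = delegated_grt - burned          # the amount that actually earns rewards
    daily_reward = working * assumed_apy / 365
    return burned / daily_reward


print(days_to_recover_tax(1_000, assumed_apy=0.10))  # about 18 days at an assumed 10% APY
```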
-### The delegation unbonding period +### Periodo de desvinculación (unstake) -Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. +Siempre que un delegador quiera anular su participación en la red, sus tokens están sujetos a un período de desvinculación equivalente a 28 días. Esto significa que no podrá transferir sus tokens o ganar alguna recompensa durante los próximos 28 días. -One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. +Una cosa a considerar también, es elegir sabiamente al Indexador. Si eliges un Indexador que no es confiable, o que no está haciendo un buen trabajo, eso te impulsará a querer anular la delegación, lo que significa que perderás muchas oportunidades de obtener recompensas, la cual puede ser igual de mala que quemar GRT.
- ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day - unbonding period._ + Ten en cuenta la tarifa del 0,5% en la interfaz de usuario para delegar, así como el período de desvinculación de 28 + días.
-### Choosing a trustworthy indexer with a fair reward payout for delegators +### Elige un indexador fiable, que pague recompensas justas a sus delegadores -This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. +Está es una parte importante que debes comprender. Primero, analicemos tres valores muy importantes, los cuales son conocidos como Parámetros de Delegación. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. +Indexing Reward Cut: también conocido como el recorte de recompensas para el indexador, consiste en una porción de las recompensas generadas, las cuales se quedará el Indexer por el trabajo hecho. Eso significa que, si este valor se establece en 100%, no recibirás ninguna recompensa al ser delegador de este Indexer. Si ves el 80%, eso significa que como delegador, recibirás el 20% de dichas recompensas. Una nota importante: al comienzo de la red, las recompensas de indexación (Indexing Rewards) representará la mayoría de las recompensas.
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The - middle one is giving delegators 20%. The bottom one is giving delegators ~83%.* + El indexador de arriba, está dando a los delegadores el 90% de las recompensas generadas. El del medio está dando a + los delegadores un 20%. ...y finalmente, el de abajo está otorgando un ~83% a sus delegadores.
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. +- Query Fee Cut: esté funciona de igual forma que el Indexing Reward Cut. Sin embargo, esto funciona específicamente para un reembolso de las tarifas por cada consulta que cobrará el Indexador. Cabe resaltar que en los inicios de la red, los retornos de las tarifas por consulta serán muy pequeños en comparación con la recompensa de indexación. Se recomienda prestar atención a la red para determinar cuándo las tarifas por consulta dentro de la red, sean significativas. -As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. +Como puedes ver, hay que pensar mucho a la hora de elegir al indexador correcto. Es por eso que te recomendamos encarecidamente que eches un vistazo al Discord de The Graph, para determinar quiénes son los Indexadores con la mayor reputación social y técnica, que puedan lograr beneficiar a los delegadores de manera sostenible. Muchos de los Indexadores son muy activos en Discord y estarán encantados de responder a tus preguntas. Muchos de ellos han Indexado durante meses en la red de prueba y están haciendo todo lo posible para ayudar a los delegadores a obtener un buen rendimiento, ya que mejora la salud y el éxito de la red. -### Calculating delegators expected return +### Calculando el retorno esperado para los delegadores -A Delegator has to consider a lot of factors when determining the return. These +Un delegador debe considerar muchos factores al determinar un retorno. Estos son expuestos a continuación: -- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Un delegador técnico también puede ver la capacidad de los Indexadores para usar los tokens que han sido delegados y la capacidad de disponibilidad a su favor. Si un in Indexador no está asignando todos los tokens disponibles, no está obteniendo el beneficio máximo que podría obtener para sí mismo o para sus delegadores. +- Por ahora, en la red, un Indexador puede optar por cerrar una asignación en cualquier momento y cobrar las recompensas dentro del primer día y el día 28. Por ende, es posible que un Indexador tenga muchas recompensas por recolectar y que por ello, sus recompensas totales sean bajas. 
Esto debe tenerse en cuenta durante los primeros días. -### Considering the query fee cut and indexing fee cut +### Siempre tenga en cuenta la tarifa por consulta y el recorte de recompensas para el Indexador -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: +Como se describe en las secciones anteriores, debes elegir un Indexador que sea transparente y honesto sobre cómo gestiona el recorte de tarifas por consulta (Query Fee Cut) y sus recortes de tarifas por indexar (Indexing Fee Cuts). Un delegador también debe mirar el tiempo de enfriamiento establecidos para los parámetros (Parameters Cooldown), a fin de conocer cada cuánto tiempo puede cambiar sus parámetros. Una vez hecho esto, es bastante sencillo calcular la cantidad de recompensas que reciben los delegadores. La fórmula es: -![Delegation Image 3](/img/Delegation-Reward-Formula.png) +![Recorte de recompensas de indexación](/img/Delegation-Reward-Formula.png) -### Considering the indexers delegation pool +### Tener en cuenta el pool de delegación de cada Indexador -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: +Otra cosa que un delegador debe considerar es la participación que tendrá dentro del pool de delegación (Delegation Pool). Todas las recompensas de la delegación se comparten de manera uniforme, con un simple reequilibrio del pool, el cual es basado en la participación depositada dentro del mismo. Esto le da al delegador una participación del pool: -![Share formula](/img/Share-Forumla.png) +![Fórmula compartida](/img/Share-Forumla.png) -Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. +Usando esta fórmula, podemos ver que en realidad es posible que un indexador que ofrece solo el 20% a los delegadores, en realidad les dé a sus delegadores una recompensa aún mejor que un indexador que les da el 90%. -A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return. +Por lo tanto, un delegador puede hacer sus propios cálculos a fin de determinar que, el Indexador que ofrece un 20% a los delegadores ofrece un mejor rendimiento. -### Considering the delegation capacity +### Considerar la capacidad de delegación -Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Otro aspecto a considerar es la capacidad de delegación. Actualmente, el promedio de delegación (Delegation Ratio) se establece en 16. 
Esto significa que si un Indexador ha colocado en stake en total 1.000.000 de GRT, su capacidad de delegación será de 16.000.000 en tokens GRT, los cuales pueden usarse para delegar dentro del protocolo. Cualquier token delegado por encima de esta cantidad diluirá todas las recompensas que recibirán los delegadores. -Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be. +Imagina que un Indexador tiene 100.000.000 GRT delegados y su capacidad es de solo 16.000.000 de GRT. Esto significa que, efectivamente, 84.000.000 tokens GRT no se están utilizando para ganar tokens. Y todos los delegadores e incluso el mismo Indexador, están ganando menos recompensas de lo que deberían estar ganando. -Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. +Por lo tanto, un delegador siempre debe tener en cuenta la capacidad de delegación de un Indexador e incluirla en su toma de decisiones. -## Video guide for the network UI +## Guía visual sobre la interfaz de la red -This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI. +Esta guía ofrece una revisión completa de este documento y de cómo tener en cuenta todo lo que contiene al interactuar con la interfaz de usuario.
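The Spanish patch above argues that an Indexer passing only 20% of rewards to Delegators can still pay better than one passing 90%, because what a Delegator actually earns is their slice of that Indexer's delegation pool. A sketch of the comparison follows; every number in it (reward amount, pool sizes, delegation) is a made-up input.

```python
# Effective delegator return: the Indexer's reward split matters less than
# how large a delegation pool that split is shared across.
# All numbers are made-up example inputs.


def delegator_reward(indexer_rewards_grt: float,
                     delegators_portion: float,
                     my_delegation: float,
                     total_delegation_pool: float) -> float:
    """GRT one delegator earns from an Indexer over one reward period."""
    pool_rewards = indexer_rewards_grt * delegators_portion
    return pool_rewards * (my_delegation / total_delegation_pool)


# Indexer A shares 90% of rewards but has a very large delegation pool.
a = delegator_reward(10_000, 0.90, my_delegation=100_000, total_delegation_pool=10_000_000)
# Indexer B shares only 20% but has a much smaller pool.
b = delegator_reward(10_000, 0.20, my_delegation=100_000, total_delegation_pool=1_000_000)
print(a, b)  # 90.0 vs 200.0 GRT: the 20% Indexer pays more in this scenario
```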
From c3418e19a209cb739ccb87dde41df1dca45aedf6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:47 -0500 Subject: [PATCH 231/241] New translations delegating.mdx (Arabic) --- pages/ar/delegating.mdx | 76 ++++++++++++++++++++--------------------- 1 file changed, 37 insertions(+), 39 deletions(-) diff --git a/pages/ar/delegating.mdx b/pages/ar/delegating.mdx index 26a0e8a1415a..207be3e2a948 100644 --- a/pages/ar/delegating.mdx +++ b/pages/ar/delegating.mdx @@ -4,90 +4,88 @@ title: تفويض Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## Delegator Guide +## دليل المفوض This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- The risks of delegating tokens in The Graph Network -- How to calculate expected returns as a delegator -- A Video guide showing the steps to delegate in the Graph Network UI +- مخاطر تفويض التوكن tokens في شبكة The Graph +- كيفية حساب العوائد المتوقعة كمفوض +- فيديو يوضح خطوات التفويض في شبكة the Graph -## Delegation Risks +## مخاطر التفويض Delegation -Listed below are the main risks of being a delegator in the protocol. +القائمة أدناه هي المخاطر الرئيسية لكونك مفوضا في البروتوكول. -### The delegation fee +### رسوم التفويض -It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. +من المهم أن تفهم أنه في كل مرة تقوم فيها بالتفويض ، سيتم حرق 0.5٪. هذا يعني أنه إذا كنت تفوض 1000 GRT ، فستحرق 5 GRT تلقائيا. -This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. +هذا يعني أنه لكي يكون المفوض Delegator آمنا ، يجب أن يحسب عائده من خلال التفويض delegating للمفهرس. على سبيل المثال ، قد يحسب المفوض عدد الأيام التي سيستغرقها قبل أن يسترد ضريبة الإيداع ال 0.5٪ التي دفعها للتفويض. -### The delegation unbonding period +### فترة إلغاء التفويض -Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. +عندما يرغب أحد المفوضين في إلغاء التفويض ، تخضع التوكن الخاصة به إلى فترة 28 يوما وذلك لإلغاء التفويض. هذا يعني أنه لا يمكنهم تحويل التوكن الخاصة بهم ، أو كسب أي مكافآت لمدة 28 يوما. -One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. +يجب اختيار المفهرس بحكمة. إذا اخترت مفهرسا ليس جديرا بالثقة ، أو لا يقوم بعمل جيد ، فستحتاج إلى إلغاء التفويض ، مما يعني أنك ستفقد الكثير من الفرص لكسب المكافآت والتي يمكن أن تكون سيئة مثل حرق GRT.
- ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day - unbonding period._ + لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض.
-### Choosing a trustworthy indexer with a fair reward payout for delegators +### اختيار مفهرس جدير بالثقة مع عائد جيد للمفوضين -This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. +هذا جزء مهم عليك أن تفهمه. أولاً ، دعنا نناقش ثلاث قيم مهمة للغاية وهي بارامترات التفويض. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. +اقتطاع مكافأة الفهرسة Indexing Reward Cut - هو جزء من المكافآت التي سيحتفظ بها المفهرس لنفسه. هذا يعني أنه إذا تم تعيينه على 100٪ ، فستحصل كمفوض على 0 كمكافآت فهرسة. إذا رأيت 80٪ في واجهة المستخدم ، فهذا يعني أنك كمفوض ، ستتلقى 20٪. ملاحظة مهمة - في بداية الشبكة ، مكافآت الفهرسة تمثل غالبية المكافآت.
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The - middle one is giving delegators 20%. The bottom one is giving delegators ~83%.* + المفهرس الأعلى يمنح المفوضين 90٪ من المكافآت. والمتوسط يمنح المفوضين 20٪. الأدنى يعطي المفوضين ~ 83٪.
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. +- اقتطاع رسوم الاستعلام Query Fee Cut - هذا تماما مثل اقتطاع مكافأة الفهرسة Indexing Reward Cut. ومع ذلك ، فهو مخصص بشكل خاص للعائدات على رسوم الاستعلام التي يجمعها المفهرس. وتجدر الإشارة إلى أنه في بداية الشبكة ، سيكون العائد من رسوم الاستعلام صغيرا جدا مقارنة بمكافأة الفهرسة. من المستحسن الاهتمام بالشبكة لتحديد متى ستصبح رسوم الاستعلام في الشبكة أكثر أهمية. -As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. +كما ترى ، تحتاج للكثير من التفكير لاختيار المفهرس الصحيح. هذا السبب في أننا نوصي بشدة باستكشاف The Graph Discord لتحديد من هم المفهرسون الذين يتمتعون بأفضل سمعة اجتماعية وتقنية لمكافأة المفوضين على أساس ثابت. العديد من المفهرسين نشيطون جدا في Discord ، وسيسعدهم الرد على أسئلتك. يقوم العديد منهم بالفهرسة لعدة أشهر في testnet ، ويبذلون قصارى جهدهم لمساعدة المفوضين على كسب عائد جيد ، حيث يعمل ذلك على تحسين الشبكة ونجاحها. -### Calculating delegators expected return +### حساب العائد المتوقع للمفوضين delegators -A Delegator has to consider a lot of factors when determining the return. These +يجب على المفوض النظر في الكثير من العوامل عند تحديد العائد. وهم -- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. +- يمكن للمفوض إلقاء نظرة على قدرة المفهرسين على استخدام التوكن tokens المفوضة لهم. إذا لم يقم المفهرس بتخصيص جميع التوكن المتاحة ، فإنه لا يكسب أقصى ربح يمكن أن يحققه لنفسه أو للمفوضين. +- الآن في الشبكة ، يمكن للمفهرس اختيار إغلاق المخصصة وجمع المكافآت في أي وقت بين 1 و 28 يوما. لذلك من الممكن أن يكون لدى المفهرس الكثير من المكافآت التي لم يجمعها بعد ، وبالتالي ، فإن إجمالي مكافآته منخفضة. يجب أن يؤخذ هذا في الاعتبار في الأيام الأولى. -### Considering the query fee cut and indexing fee cut +### النظر في اقتطاع رسوم الاستعلام query fee cut واقتطاع رسوم الفهرسة indexing fee cut -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. 
The formula is:
+كما هو موضح في الأقسام أعلاه ، يجب عليك اختيار مفهرس يتسم بالشفافية والصدق بشأن اقتطاع رسوم الاستعلام Query Fee Cut واقتطاع رسوم الفهرسة Indexing Fee Cuts. يجب على المفوض أيضا إلقاء نظرة على بارامترات Cooldown time لمعرفة مقدار الوقت المتاح لديهم. بعد الانتهاء من ذلك ، من السهل إلى حد ما حساب مقدار المكافآت التي يحصل عليها المفوضون. الصيغة هي:

-![Delegation Image 3](/img/Delegation-Reward-Formula.png)
+![صورة التفويض 3](/img/Delegation-Reward-Formula.png)

-### Considering the indexers delegation pool
+### النظر في أسهم تفويض المفهرسين

-Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool:
+شيء آخر يجب على المفوضين مراعاته وهو نسبة أسهم التفويض Delegation Pool التي يمتلكونها. يتم تقاسم أسهم مكافآت التفويض بالتساوي ، مع إعادة موازنة بسيطة يتم تحديدها حسب المبلغ الذي أودعه المفوض. هذا يمنح المفوض حصة من الأسهم:

-![Share formula](/img/Share-Forumla.png)
+![شارك الصيغة](/img/Share-Forumla.png)

Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators.

-A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return.
+لذلك يمكن للمفوض أن يقوم بالحسابات لتحديد أن المفهرس الذي يقدم 20٪ للمفوضين يقدم عائدا أفضل.

-### Considering the delegation capacity
+### النظر في سعة التفويض

-Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.
+شيء آخر للنظر هو سعة التفويض. حاليا نسبة التفويض تم تعيينه على 16. هذا يعني أنه إذا قام المفهرس بعمل staking ل 1،000،000 GRT ، فإن سعة التفويض الخاصة به هي 16،000،000 GRT من التوكن المفوضة التي يمكنهم استخدامها في البروتوكول. أي توكن مفوّضة تزيد عن هذا المبلغ ستخفف من جميع مكافآت المفوضين.

-Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be.
+تخيل أن المفهرس لديه 100،000،000 GRT مفوضة ، وسعته هي فقط 16،000،000 GRT. هذا يعني أنه لا يتم استخدام 84.000.000 من توكنات GRT لكسب التوكنات. وجميع المفوضين والمفهرس يحصلون على مكافآت أقل مما يمكن أن يحصلوا عليه.

-Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.
+لذلك يجب على المفوض دائما مراعاة سعة التفويض Delegation Capacity الخاصة بالمفهرس ، وأخذها في الاعتبار عند اتخاذ قراره.

-## Video guide for the network UI
+## فيديو لواجهة مستخدم الشبكة

-This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
+يقدم هذا الدليل مراجعة كاملة لهذا المستند ، وكيفية مراعاة كل ما ورد فيه أثناء التفاعل مع واجهة المستخدم.
From e4f91090000b530bcae85a5c1fbf6f34b70320e6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:48 -0500 Subject: [PATCH 232/241] New translations delegating.mdx (Korean) --- pages/ko/delegating.mdx | 73 ++++++++++++++++++++--------------------- 1 file changed, 36 insertions(+), 37 deletions(-) diff --git a/pages/ko/delegating.mdx b/pages/ko/delegating.mdx index 49c14cd8e249..20cd2496fa04 100644 --- a/pages/ko/delegating.mdx +++ b/pages/ko/delegating.mdx @@ -4,90 +4,89 @@ title: 위임하기 Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. -## Delegator Guide +## 위임자 가이드 This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: -- The risks of delegating tokens in The Graph Network -- How to calculate expected returns as a delegator -- A Video guide showing the steps to delegate in the Graph Network UI +- 더그래프 네트워크에 토큰을 위임할 때의 위험요소 +- 위임자로서의 예상 수익을 계산하는 방법 +- 더그래프 네트워크 UI에서 위임하는 절차를 보여주는 비디오 가이드 -## Delegation Risks +## 위임 위험요소 -Listed below are the main risks of being a delegator in the protocol. +아래의 리스트들은 프로토콜에서 위임자가 될 때의 주된 위험요소들입니다. -### The delegation fee +### 위임 수수료 -It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. +여러분들이 위임 행위를 할 때마다 0.5%의 요금이 부과된다는 점을 이해하는 것이 중요합니다. 이는 1000 GRT를 위임하는 경우 여러분들은 5 GRT를 자동적으로 소각하게 된다는 것을 뜻합니다. -This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. +즉, 안전을 위해서 위임자는 인덱서에 위임을 행함으로써 얻게될 수익을 계산해야 한다는 뜻입니다. 예를 들어, Delegator는 해당 위임에 대해 0.5%의 보증세를 다시 벌어들이기까지 며칠이 걸릴지 계산을 해야합니다. -### The delegation unbonding period +### 위임 해지 기간 -Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. +위임자가 위임의 해지를 원할 경우, 28일의 토큰 위임 해지 기간이 적용됩니다. 이는 그들이 28일 동안 토큰을 이전할 수 없고, 보상 또한 수령하지 못한다는 것을 의미합니다. -One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. +또한 고려해야 할 한 가지는 위임을 위한 인덱서를 현명하게 선택하는 것입니다. 만약 여러분들이 신뢰할 수 없거나 작업을 제대로 수행하지 않는 인덱서를 선택하면 여러분들은 해당 위임의 취소를 원할 것입니다. 이 경우, 보상을 받는 기회를 잃음과 더불어, 단지 여러분의 GRT를 소각하기만 한 결과를 초래할 것입니다.
- ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day - unbonding period._ + 위임 UI에는 0.5%의 수수료 및 28일의 위임 해지 기간이 명시되어있습니다.
-### Choosing a trustworthy indexer with a fair reward payout for delegators +### 위임자들에 대한 공정한 보상 지급 규칙을 지닌 신뢰할 수 있는 인덱서 선택 -This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. +이것은 이해해야 하는 중요한 부분입니다. 먼저 위임 매개 변수라는 세 가지 매우 중요한 값에 대해 살펴보겠습니다. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. +Indexing Reward Cut – Indexing Reward Cut은 인덱서가 스스로 가져갈 보상의 비율입니다. 즉, 100%로 설정된 경우 위임자에게 주어지는 인덱싱 보상이 0이 됩니다. 만약 UI에 80%로 표시되어 있다면, 이는 여러분은 위임자로서 20%를 받게 된다는 것을 의미합니다. 중요 참고 사항 - 네트워크 시작 부분의 인덱싱 보상이 보상의 대부분을 차지합니다.
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The - middle one is giving delegators 20%. The bottom one is giving delegators ~83%.* + 맨 위에 위치하는 인덱서는 위임자들에게 보상의 90%를 지급합니다. 가운데 있는 인덱서는 위임자들에게 보상의 20%를 + 지급합니다. 제일 하단의 인덱서는 위임자들에게 보상액의 83% 상당을 지급하는 인덱서입니다.
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant.
+- Query Fee Cut - 이는 Indexing Reward Cut과 동일하게 작동합니다. 그러나 이 값은 특별히 인덱서가 수집하는 쿼리 수수료의 반환에 사용됩니다. 네트워크 시작 시에 쿼리 수수료 수익은 인덱싱 보상에 비해 매우 적다는 점에 유의해야 합니다. 네트워크에서 쿼리 수수료가 더 중요해지기 시작할 시기를 결정하기 위해 네트워크에 주의를 기울이는 것이 좋습니다.

-As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network.
+보시다시피 올바른 인덱서를 선택해야 하는 여러가지 고려사항이 존재합니다. 이러한 이유로 저희는 여러분들이 더그래프 디스코드 채널을 살펴보시고, 사회적 평판 및 기술적 평판을 잘 갖추고, 일관성을 기반으로 위임자들에게 보상을 지급하는 인덱서가 누구인지 확인하시기를 강력히 추천드립니다. 대부분의 인덱서는 디스코드에서 매우 활발히 활동중이며, 여러분들의 질문에 기꺼이 대답할 것입니다. 이들 중 다수는 테스트넷에서 몇 개월 동안 인덱싱 작업을 수행했으며, 네트워크의 건강과 성공을 향상시켜 위임자가 좋은 수익을 얻을 수 있도록 최선을 다하고 있습니다.

-### Calculating delegators expected return
+### 위임자들의 예상 수익 계산

-A Delegator has to consider a lot of factors when determining the return. These
+위임자는 수익을 결정할 때 수많은 요소를 고려해야 합니다. 이러한 요소들은 아래에 설명되어 있습니다.

-- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
-- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- 기술적인 위임자들은 해당 인덱서가 그들에게 위임되어 사용 가능한 토큰을 올바르게 사용할 수 있는 능력을 갖추었는지를 볼 수 있습니다. 만약 인덱서들이 그들이 위임할 수 있는 모든 토큰을 할당하지 않는다면, 그들 자신 및 위임자들을 위한 최대 수익을 창출할 수 없습니다.
+- 현재 네트워크에서 인덱서는 보상들을 수집하고, 할당을 닫는 기간을 1일에서 28일 사이의 기간으로 언제든지 선택할 수 있습니다. 따라서 어떤 인덱서는 아직 수집하지 않은 보상이 많을 수도 있으며, 이로 인해 그들의 총 보상이 낮을 수 있습니다. 이는 초기 며칠 동안에는 반드시 고려해야 할 사항입니다.

-### Considering the query fee cut and indexing fee cut
+### Query fee cut 및 Indexing fee cut에 대한 고려

-As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is:
+위의 섹션에서 설명한 대로, 여러분들은 Query Fee Cut 및 Indexing Fee Cuts에 대해 투명하고 정직한 인덱서들을 선택해야 합니다. 또한 위임자는 Parameters Cooldown 시간을 확인하여, 그들의 쿨다운 시간으로 인해 얼마나 많은 지연 시간이 존재하는지 확인해야 합니다. 그렇게 한 후, 위임자들은 매우 쉽게 수령 리워드 총액을 계산할 수 있습니다. 공식은 다음과 같습니다:

![Delegation Image 3](/img/Delegation-Reward-Formula.png)

-### Considering the indexers delegation pool
+### 위임자들의 위임 풀에 대한 고려

-Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool:
+위임자들이 고려해야 할 또 다른 사항은 그들의 소유하고 있는 위임 풀의 비율입니다. 모든 위임 보상은 균등하게 공유되며, 단순하게 위임자가 풀에 입금한 양으로 풀의 균형을 재조정합니다. 다음과 같이 위임자에게 풀의 지분이 주어집니다.

![Share formula](/img/Share-Forumla.png)

-Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators.
+이 공식을 사용하면, 실제로는 위임자에게 20%만 제공하는 인덱서가 위임자에게 90%를 제공하는 인덱서보다 오히려 더 나은 보상을 위임자에게 줄 수도 있다는 것을 알 수 있습니다. 따라서 위임자는 이러한 계산을 통해 위임자에게 20%를 제공하는 해당 인덱서가 더 나은 보상을 제공한다는 것을 판단할 수 있습니다.

-### Considering the delegation capacity
+### 위임 수용력에 대한 고려

-Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.
+또 다른 고려사항은 위임 수용력입니다. 현재 위임 비율은 16으로 설정되어 있습니다. 만약 어떠한 인덱서가 1,000,000 GRT를 스테이킹 한 경우 프로토콜에서 그들이 사용할 수 있는 위임 토큰의 위임 수용력 수량은 16,000,000GRT입니다. 이 금액 이상의 위임된 토큰은 모든 위임자의 보상을 희석시킵니다.

-Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be.
+만약 어떠한 인덱서에 위임된 GRT가 100,000,000개이고 수용력은 16,000,000 GRT에 불과하다고 가정해 보십시오. 이는 사실상 84,000,000개의 GRT 토큰이 실제로 토큰을 얻기 위해 사용되지 않고 있음을 의미합니다. 그리고 모든 위임자들과 인덱서는 실제 그들이 받을 수 있는 보상 보다 훨씬 적은 보상을 받고 있는 것입니다.

-Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.
+따라서 위임자는 항상 인덱서의 위임 수용력을 고려하고, 이를 의사 결정에 반영해야 합니다.

-## Video guide for the network UI
+## 네트워크 UI를 위한 비디오 가이드

-This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
+이 가이드는 이 문서의 내용을 전체적으로 검토하고, UI와 상호 작용하는 동안 이 문서의 모든 내용을 어떻게 고려해야 하는지 설명합니다.
From f7013ade28d29e0a8fc66bd1908cd9b869aee5b4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:49 -0500 Subject: [PATCH 233/241] New translations explorer.mdx (Vietnamese) --- pages/vi/explorer.mdx | 204 +++++++++++++++++++++--------------------- 1 file changed, 102 insertions(+), 102 deletions(-) diff --git a/pages/vi/explorer.mdx b/pages/vi/explorer.mdx index c8df28cfe03f..fef6a2b6a34b 100644 --- a/pages/vi/explorer.mdx +++ b/pages/vi/explorer.mdx @@ -2,13 +2,13 @@ title: The Graph Explorer --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +Chào mừng bạn đến với Graph Explorer, hay như chúng tôi thường gọi, cổng thông tin phi tập trung của bạn vào thế giới subgraphs và dữ liệu mạng. 👩🏽‍🚀 Graph Explorer bao gồm nhiều phần để bạn có thể tương tác với các nhà phát triển subgraph khác, nhà phát triển dapp, Curators, Indexers, và Delegators. Để biết tổng quan chung về Graph Explorer, hãy xem video bên dưới (hoặc tiếp tục đọc bên dưới):
@@ -16,196 +16,196 @@ Welcome to the Graph Explorer, or as we like to call it, your decentralized port ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +Điều đầu tiên, nếu bạn vừa hoàn thành việc triển khai và xuất bản subgraph của mình trong Subgraph Studio, thì tab Subgraphs ở trên cùng của thanh điều hướng là nơi để xem các subgraph đã hoàn thành của riêng bạn (và các subgraph của những người khác) trên mạng phi tập trung. Tại đây, bạn sẽ có thể tìm thấy chính xác subgraph mà bạn đang tìm kiếm dựa trên ngày tạo, lượng tín hiệu hoặc tên. ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +Khi bạn nhấp vào một subgraph, bạn sẽ có thể thử các truy vấn trong playground và có thể tận dụng chi tiết mạng để đưa ra quyết định sáng suốt. Bạn cũng sẽ có thể báo hiệu GRT trên subgraph của riêng bạn hoặc các subgraph của người khác để làm cho các indexer nhận thức được tầm quan trọng và chất lượng của nó. Điều này rất quan trọng vì việc báo hiệu trên một subgraph khuyến khích nó được lập chỉ mục, có nghĩa là nó sẽ xuất hiện trên mạng để cuối cùng phục vụ các truy vấn. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +Trên trang chuyên dụng của mỗi subgraph, một số chi tiết được hiển thị. Bao gồm: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- Báo hiệu / Hủy báo hiệu trên subgraph +- Xem thêm chi tiết như biểu đồ, ID triển khai hiện tại và siêu dữ liệu khác +- Chuyển đổi giữa các phiên bản để khám phá các lần bản trước đây của subgraph +- Truy vấn subgraph qua GraphQL +- Thử subgraph trong playground +- Xem các Indexers đang lập chỉ mục trên một subgraph nhất định +- Thống kê Subgraph (phân bổ, Curators, v.v.) +- Xem pháp nhân đã xuất bản subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## Những người tham gia -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +Trong tab này, bạn sẽ có được cái nhìn tổng thể về tất cả những người đang tham gia vào các hoạt động mạng, chẳng hạn như Indexers, Delegators, và Curators. Dưới đây, chúng tôi sẽ đi vào đánh giá sâu về ý nghĩa của mỗi tab đối với bạn. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. 
Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Hãy bắt đầu với Indexers (Người lập chỉ mục). Các Indexers là xương sống của giao thức, là những người đóng góp vào các subgraph, lập chỉ mục chúng và phục vụ các truy vấn cho bất kỳ ai sử dụng subgraph. Trong bảng Indexers, bạn sẽ có thể thấy các thông số ủy quyền của Indexer, lượng stake của họ, số lượng họ đã stake cho mỗi subgraph và doanh thu mà họ đã kiếm được từ phí truy vấn và phần thưởng indexing. Đi sâu hơn: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- Phần Cắt Phí Truy vấn - là % hoàn phí truy vấn mà Indexer giữ lại khi ăn chia với Delegators +- Phần Cắt Thưởng Hiệu quả - phần thưởng indexing được áp dụng cho nhóm ủy quyền (delegation pool). Nếu là âm, điều đó có nghĩa là Indexer đang cho đi một phần phần thưởng của họ. Nếu là dương, điều đó có nghĩa là Indexer đang giữ lại một số phần thưởng của họ +- Cooldown Remaining (Thời gian chờ còn lại) - thời gian còn lại cho đến khi Indexer có thể thay đổi các thông số ủy quyền ở trên. Thời gian chờ Cooldown được Indexers thiết lập khi họ cập nhật thông số ủy quyền của mình +- Được sở hữu - Đây là tiền stake Indexer đã nạp vào, có thể bị phạt cắt giảm (slashed) nếu có hành vi độc hại hoặc không chính xác +- Được ủy quyền - Lượng stake từ các Delegator có thể được Indexer phân bổ, nhưng không thể bị phạt cắt giảm +- Được phân bổ - phần stake mà Indexers đang tích cực phân bổ cho các subgraph mà họ đang lập chỉ mục +- Năng lực Ủy quyền khả dụng - số token stake được ủy quyền mà Indexers vẫn có thể nhận được trước khi họ trở nên ủy quyền quá mức (overdelegated) +- Max Delegation Capacity (Năng lực Ủy quyền Tối đa) - số tiền token stake được ủy quyền tối đa mà Indexer có thể chấp nhận một cách hiệu quả. Số tiền stake được ủy quyền vượt quá con số này sẽ không thể được sử dụng để phân bổ hoặc tính toán phần thưởng. 
+- Phí Truy vấn - đây là tổng số phí mà người dùng cuối đã trả cho các truy vấn từ Indexer đến hiện tại +- Thưởng Indexer - đây là tổng phần thưởng indexer mà Indexer và các Delegator của họ kiếm được cho đến hiện tại. Phần thưởng Indexer được trả thông qua việc phát hành GRT. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers có thể kiếm được cả phí truy vấn và phần thưởng indexing. Về mặt chức năng, điều này xảy ra khi những người tham gia mạng ủy quyền GRT cho Indexer. Điều này cho phép Indexers nhận phí truy vấn và phần thưởng tùy thuộc vào thông số Indexer của họ. Các thông số Indexing được cài đặt bằng cách nhấp vào phía bên phải của bảng hoặc bằng cách truy cập hồ sơ của Indexer và nhấp vào nút “Ủy quyền”. -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Để tìm hiểu thêm về cách trở thành một Indexer, bạn có thể xem qua [tài liệu chính thức](/indexing) hoặc [Hướng dẫn về Indexer của Học viện The Graph.](https://thegraph.academy/delegators/choosing-indexers/) ![Indexing details pane](/img/Indexing-Details-Pane.png) ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators (Người Giám tuyển) phân tích các subgraph để xác định subgraph nào có chất lượng cao nhất. Một khi Curator tìm thấy một subgraph có khả năng hấp dẫn, họ có thể curate nó bằng cách báo hiệu trên đường cong liên kết (bonding curve) của nó. Khi làm như vậy, Curator sẽ cho Indexer biết những subgraph nào có chất lượng cao và nên được lập chỉ mục. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +Curators có thể là các thành viên cộng đồng, người tiêu dùng dữ liệu hoặc thậm chí là nhà phát triển subgraph, những người báo hiệu trên subgraph của chính họ bằng cách nạp token GRT vào một đường cong liên kết. Bằng cách nạp GRT, Curator đúc ra cổ phần curation của một subgraph. Kết quả là, Curators có đủ điều kiện để kiếm một phần phí truy vấn mà subgraph mà họ đã báo hiệu tạo ra. Đường cong liên kết khuyến khích Curators quản lý các nguồn dữ liệu chất lượng cao nhất. 
Bảng Curator trong phần này sẽ cho phép bạn xem: -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- Ngày Curator bắt đầu curate +- Số GRT đã được nạp +- Số cổ phần một Curator sở hữu ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) +Nếu muốn tìm hiểu thêm về vai trò Curator, bạn có thể thực hiện việc này bằng cách truy cập các liên kết sau của [Học viện The Graph](https://thegraph.academy/curators/) hoặc [tài liệu chính thức.](/curating) ### 3. Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators (Người Ủy quyền) đóng một vai trò quan trọng trong việc duy trì tính bảo mật và phân quyền của Mạng The Graph. Họ tham gia vào mạng bằng cách ủy quyền (tức là "staking") token GRT cho một hoặc nhiều indexer. Không có những Delegator, các Indexer ít có khả năng kiếm được phần thưởng và phí đáng kể. Do đó, Indexer tìm cách thu hút Delegator bằng cách cung cấp cho họ một phần của phần thưởng lập chỉ mục và phí truy vấn mà họ kiếm được. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! +Delegator, đổi lại, chọn Indexer dựa trên một số biến số khác nhau, chẳng hạn như hiệu suất trong quá khứ, tỷ lệ phần thưởng lập chỉ mục và phần cắt phí truy vấn. Danh tiếng trong cộng đồng cũng có thể đóng vai trò quan trọng trong việc này! Bạn nên kết nối với những các indexer đã chọn qua[Discord của The Graph](https://thegraph.com/discord) hoặc [Forum The Graph](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +Bảng Delegators sẽ cho phép bạn xem các Delegator đang hoạt động trong cộng đồng, cũng như các chỉ số như: -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- Số lượng Indexers mà một Delegator đang ủy quyền cho +- Ủy quyền ban đầu của Delegator +- Phần thưởng họ đã tích lũy nhưng chưa rút khỏi giao thức +- Phần thưởng đã ghi nhận ra mà họ rút khỏi giao thức +- Tổng lượng GRT mà họ hiện có trong giao thức +- Ngày họ ủy quyền lần cuối cùng -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). 
+Nếu bạn muốn tìm hiểu thêm về cách trở thành một Delegator, đừng tìm đâu xa! Tất cả những gì bạn phải làm là đi đến [tài liệu chính thức](/delegating) hoặc [Học viện The Graph](https://docs.thegraph.academy/network/delegators). -## Network +## Mạng lưới -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Trong phần Mạng lưới, bạn sẽ thấy các KPI toàn cầu cũng như khả năng chuyển sang cơ sở từng epoch và phân tích các chỉ số mạng chi tiết hơn. Những chi tiết này sẽ cho bạn biết mạng hoạt động như thế nào theo thời gian. -### Activity +### Hoạt động -The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +Phần hoạt động có tất cả các chỉ số mạng hiện tại cũng như một số chỉ số tích lũy theo thời gian. Ở đây bạn có thể thấy những thứ như: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- Tổng stake mạng hiện tại +- Phần chia stake giữa Indexer và các Delegator của họ +- Tổng cung GRT, lượng được đúc và đốt kể từ khi mạng lưới thành lập +- Tổng phần thưởng Indexing kể từ khi bắt đầu giao thức +- Các thông số giao thức như phần thưởng curation, tỷ lệ lạm phát,... +- Phần thưởng và phí của epoch hiện tại -A few key details that are worth mentioning: +Một vài chi tiết quan trọng đáng được đề cập: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Phí truy vấn đại diện cho phí do người tiêu dùng tạo ra**, và chúng có thể được Indexer yêu cầu (hoặc không) sau một khoảng thời gian ít nhất 7 epochs (xem bên dưới) sau khi việc phân bổ của họ cho các subgraph đã được đóng lại và dữ liệu mà chúng cung cấp đã được người tiêu dùng xác thực. +- **Phần thưởng Indexing đại diện cho số phần thưởng mà Indexer đã yêu cầu được từ việc phát hành mạng trong epoch đó.** Mặc dù việc phát hành giao thức đã được cố định, nhưng phần thưởng chỉ nhận được sau khi Indexer đóng phân bổ của họ cho các subgraph mà họ đã lập chỉ mục. Do đó, số lượng phần thưởng theo từng epoch khác nhau (nghĩa là trong một số epoch, Indexer có thể đã đóng chung các phân bổ đã mở trong nhiều ngày). 
![Explorer Image 8](/img/Network-Stats.png) ### Epochs -In the Epochs section you can analyse on a per-epoch basis, metrics such as: +Trong phần Epochs, bạn có thể phân tích trên cơ sở từng epoch, các chỉ số như: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +- Khối bắt đầu hoặc kết thúc của Epoch +- Phí truy vấn được tạo và phần thưởng indexing được thu thập trong một epoch cụ thể +- Trạng thái Epoch, đề cập đến việc thu và phân phối phí truy vấn và có thể có các trạng thái khác nhau: + - Epoch đang hoạt động là epoch mà Indexer hiện đang phân bổ cổ phần và thu phí truy vấn + - Epoch đang giải quyết là những epoch mà các kênh trạng thái đang được giải quyết. Điều này có nghĩa là Indexers có thể bị phạt cắt giảm nếu người tiêu dùng công khai tranh chấp chống lại họ. + - Epoch đang phân phối là epoch trong đó các kênh trạng thái cho các epoch đang được giải quyết và Indexer có thể yêu cầu hoàn phí truy vấn của họ. + - Epoch được hoàn tất là những epoch không còn khoản hoàn phí truy vấn nào để Indexer yêu cầu, do đó sẽ được hoàn thiện. ![Explorer Image 9](/img/Epoch-Stats.png) -## Your User Profile +## Hồ sơ Người dùng của bạn -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Nãy giờ chúng ta đã nói về các thống kê mạng, hãy chuyển sang hồ sơ cá nhân của bạn. Hồ sơ người dùng cá nhân của bạn là nơi để bạn xem hoạt động mạng của mình, bất kể bạn đang tham gia mạng như thế nào. Ví Ethereum của bạn sẽ hoạt động như hồ sơ người dùng của bạn và với Trang Tổng quan Người dùng, bạn sẽ có thể thấy: -### Profile Overview +### Tổng quan Hồ sơ -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +Đây là nơi bạn có thể xem bất kỳ hành động hiện tại nào bạn đã thực hiện. Đây cũng là nơi bạn có thể tìm thấy thông tin hồ sơ, mô tả và trang web của mình (nếu bạn đã thêm). ![Explorer Image 10](/img/Profile-Overview.png) -### Subgraphs Tab +### Tab Subgraphs -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +Nếu bạn nhấp vào tab Subgraphs, bạn sẽ thấy các subgraph đã xuất bản của mình. Điều này sẽ không bao gồm bất kỳ subgraph nào được triển khai với CLI cho mục đích thử nghiệm - các subgraph sẽ chỉ hiển thị khi chúng được xuất bản lên mạng phi tập trung. 
![Explorer Image 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### Tab Indexing -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +Nếu bạn nhấp vào tab Indexing, bạn sẽ tìm thấy một bảng với tất cả các phân bổ hiện hoạt và lịch sử cho các subgraph, cũng như các biểu đồ mà bạn có thể phân tích và xem hiệu suất trước đây của mình với tư cách là Indexer. -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +Phần này cũng sẽ bao gồm thông tin chi tiết về phần thưởng Indexer ròng của bạn và phí truy vấn ròng. Bạn sẽ thấy các số liệu sau: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- Stake được ủy quyền - phần stake từ Delegator có thể được bạn phân bổ nhưng không thể bị phạt cắt giảm (slashed) +- Tổng Phí Truy vấn - tổng phí mà người dùng đã trả cho các truy vấn do bạn phục vụ theo thời gian +- Phần thưởng Indexer - tổng số phần thưởng Indexer bạn đã nhận được, tính bằng GRT +- Phần Cắt Phí - lượng % hoàn phí phí truy vấn mà bạn sẽ giữ lại khi ăn chia với Delegator +- Phần Cắt Thưởng - lượng % phần thưởng Indexer mà bạn sẽ giữ lại khi ăn chia với Delegator +- Được sở hữu - số stake đã nạp của bạn, có thể bị phạt cắt giảm (slashed) vì hành vi độc hại hoặc không chính xác ![Explorer Image 12](/img/Indexer-Stats.png) -### Delegating Tab +### Tab Delegating -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegator rất quan trọng đối với Mạng The Graph. Một Delegator phải sử dụng kiến thức của họ để chọn một Indexer sẽ mang lại lợi nhuận lành mạnh từ các phần thưởng. Tại đây, bạn có thể tìm thấy thông tin chi tiết về các ủy quyền đang hoạt động và trong lịch sử của mình, cùng với các chỉ số của Indexer mà bạn đã ủy quyền. -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +Trong nửa đầu của trang, bạn có thể thấy biểu đồ ủy quyền của mình, cũng như biểu đồ chỉ có phần thưởng. Ở bên trái, bạn có thể thấy các KPI phản ánh các chỉ số ủy quyền hiện tại của bạn. -The Delegator metrics you’ll see here in this tab include: +Các chỉ số Delegator mà bạn sẽ thấy ở đây trong tab này bao gồm: -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- Tổng pphần thưởng ủy quyền +- Tổng số phần thưởng chưa ghi nhận +- Tổng số phần thưởng đã ghi được -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +Trong nửa sau của trang, bạn có bảng ủy quyền. 
Tại đây, bạn có thể thấy các Indexer mà bạn đã ủy quyền, cũng như thông tin chi tiết của chúng (chẳng hạn như phần cắt thưởng, thời gian chờ, v.v.). -With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. +Với các nút ở bên phải của bảng, bạn có thể quản lý ủy quyền của mình - ủy quyền nhiều hơn, hủy bỏ hoặc rút lại ủy quyền của bạn sau khoảng thời gian rã đông (thawing period). -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +Lưu ý rằng biểu đồ này có thể cuộn theo chiều ngang, vì vậy nếu bạn cuộn hết cỡ sang bên phải, bạn cũng có thể thấy trạng thái ủy quyền của mình (ủy quyền, hủy ủy quyền, có thể rút lại). ![Explorer Image 13](/img/Delegation-Stats.png) -### Curating Tab +### Tab Curating -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +Trong tab Curation, bạn sẽ tìm thấy tất cả các subgraph mà bạn đang báo hiệu (do đó cho phép bạn nhận phí truy vấn). Báo hiệu cho phép Curator đánh dấu cho Indexer biết những subgraph nào có giá trị và đáng tin cậy, do đó báo hiệu rằng chúng cần được lập chỉ mục. -Within this tab, you’ll find an overview of: +Trong tab này, bạn sẽ tìm thấy tổng quan về: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph -- Updated at date details +- Tất cả các subgraph bạn đang quản lý với các chi tiết về tín hiệu +- Tổng cổ phần trên mỗi subgraph +- Phần thưởng truy vấn cho mỗi subgraph +- Chi tiết ngày được cập nhật ![Explorer Image 14](/img/Curation-Stats.png) -## Your Profile Settings +## Cài đặt Hồ sơ của bạn -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +Trong hồ sơ người dùng của mình, bạn sẽ có thể quản lý chi tiết hồ sơ cá nhân của mình (như thiết lập tên ENS). Nếu bạn là Indexer, bạn thậm chí có nhiều quyền truy cập hơn vào các cài đặt trong tầm tay của mình. Trong hồ sơ người dùng của mình, bạn sẽ có thể thiết lập các tham số ủy quyền và operator của mình. -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- Operators (Người vận hành) thực hiện các hành động được hạn chế trong giao thức thay mặt cho Indexer, chẳng hạn như mở và đóng phân bổ. Operators thường là các địa chỉ Ethereum khác, tách biệt với ví đặt staking của họ, với quyền truy cập được kiểm soát vào mạng mà Indexer có thể cài đặt cá nhân +- Tham số ủy quyền cho phép bạn kiểm soát việc phân phối GRT giữa bạn và các Delegator của bạn. 
![Explorer Image 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +Là cổng thông tin chính thức của bạn vào thế giới dữ liệu phi tập trung, Graph Explorer cho phép bạn thực hiện nhiều hành động khác nhau, bất kể vai trò của bạn trong mạng. Bạn có thể truy cập cài đặt hồ sơ của mình bằng cách mở menu thả xuống bên cạnh địa chỉ của bạn, sau đó nhấp vào nút Cài đặt.
![Wallet details](/img/Wallet-Details.png)
From 1261dd04cc99f3327813b00eb8f89cfdea985049 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:50 -0500 Subject: [PATCH 234/241] New translations delegating.mdx (Chinese Simplified) --- pages/zh/delegating.mdx | 76 ++++++++++++++++++++--------------------- 1 file changed, 37 insertions(+), 39 deletions(-) diff --git a/pages/zh/delegating.mdx b/pages/zh/delegating.mdx index 217c80e3f9ff..8ba0e39c9035 100644 --- a/pages/zh/delegating.mdx +++ b/pages/zh/delegating.mdx @@ -2,86 +2,84 @@ title: 委托 --- -Delegators cannot be slashed for bad behavior, but there is a deposit tax on Delegators to disincentivize poor decision making that could harm the integrity of the network. +委托人不能因为不良行为而被取消,但对委托有存款税,以抑制可能损害网络完整性的不良决策。 -## Delegator Guide +## 委托人指南 -This guide will explain how to be an effective delegator in the Graph Network. Delegators share earnings of the protocol alongside all indexers on their delegated stake. A Delegator must use their best judgement to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide: +本指南将解释如何在Graph网络中成为一个有效的委托人。 委托人与所有索引人一起分享其委托股权的协议收益。 委托人必须根据多种因素,运用他们的最佳判断力来选择索引人。 请注意,本指南将不涉及正确设置Metamask等步骤,因为这些信息在互联网上广泛存在。 本指南有三个部分: -- The risks of delegating tokens in The Graph Network -- How to calculate expected returns as a delegator -- A Video guide showing the steps to delegate in the Graph Network UI +- 在 The Graph 网络中委托代币的风险 +- 如何计算作为委托人的预期回报 +- 展示在 The Graph 网络界面中进行委托步骤的视频指南 -## Delegation Risks +## 委托风险 -Listed below are the main risks of being a delegator in the protocol. +下面列出了作为议定书中的委托人的主要风险。 -### The delegation fee +### 委托费用 -It is important to understand that every time you delegate, you will be charged 0.5%. This means if you are delegating 1000 GRT, you will automatically burn 5 GRT. +重要的是要了解每次委托时,您将被收取 0.5% 的费用。 这意味着如果您委托 1000 GRT,您将自动销毁 5 GRT。 -This means that to be safe, a Delegator should calculate what their return will be by delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% deposit tax on their delegation. +这意味着为了安全起见,委托人应该通过委托给索引人来计算他们的回报。 例如,委托人可能会计算他们需要多少天才能收回其委托的 0.5% 存款税。 -### The delegation unbonding period +### 委托解约期 -Whenever a Delegator wants to undelegate, their tokens are subject to a 28 day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days. +每当委托人想要解除委托时,他们的代币都有 28 天的解除绑定期。 这意味着他们在 28 天内不能转移他们的代币,也不能获得任何奖励。 -One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunity to earn rewards, which can be just as bad as burning GRT. +还需要考虑的一件事是明智地选择索引人。 如果您选择了一个不值得信赖的 索引人,或者没有做好工作,您将想要取消委托,这意味着您将失去很多获得奖励的机会,这可能与燃烧 GRT 一样糟糕。
- ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day - unbonding period._ + 请注意委托用户界面中的0.5%费用,以及28天的解约期。
-### Choosing a trustworthy indexer with a fair reward payout for delegators +### 选择一个为委托人提供公平的奖励分配的值得信赖的索引人 -This is an important part to understand. First let's discuss three very important values, which are the Delegation Parameters. +这是需要理解的重要部分。 首先让我们讨论三个非常重要的值,即委托参数。 -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the indexer will keep for themselves. That means, if it is set to 100%, as a delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a delegator, you will receive 20%. An important note - in the beginning of the network, Indexing Rewards will account for the majority of the rewards. +索引奖励分成- 索引奖励分成是指索引人将为自己保留的那部分奖励。 这意味着,如果它被设置为 100%,作为一个委托人,你将获得 0 个索引奖励。 如果你在 UI 中看到 80%,这意味着作为委托人,你将获得 20%。 一个重要的说明 -在网络的初期,索引奖励将占奖励的大部分比重。
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top indexer is giving delegators 90% of the rewards. The - middle one is giving delegators 20%. The bottom one is giving delegators ~83%.* + 面的索引人分给委托人 90% 的收益。 中间的给委托人 20%。 下面的给委托人约 83%。
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this is specifically for returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended to pay attention to the network to determine when the query fees in the network will start to be more significant. +- 查询费分成-这与索引奖励分成的运作方式完全相同。 不过,这是专门针对索引人收取的查询费的回报。 需要注意的是,在网络初期,查询费的回报与索引奖励相比会非常小。 建议关注网络来确定网络中的查询费何时开始变的比较可观。 -As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward delegators on a consistent basis. Many of the Indexers are very active in Discord, and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help delegators earn a good return, as it improves the health and success of the network. +正如您所看到的,在选择合适的索引人时必须要考虑很多。 这就是为什么我们强烈建议您探索 The Graph Discord,以确定哪些是具有最佳社会声誉和技术声誉的索引人,并以持续的方式奖励委托人。 许多索引人在 Discord 中非常活跃,他们将很乐意回答您的问题。 他们中的许多人已经在测试网中做了几个月的索引人,并且正在尽最大努力帮助委托人们赚取良好的回报,因为如此可以增进网络的健康运行和成功。 -### Calculating delegators expected return +### 计算委托人的预期收益 -A Delegator has to consider a lot of factors when determining the return. These +委托人在确定收益时必须考虑很多因素。 这些因素解释如下 : -- A technical Delegator can also look at the Indexers ability to use the Delegated tokens available to them. If an indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So it is possible that an Indexer has a lot of rewards they have not collected yet, and thus, their total rewards are low. This should be taken into consideration in the early days. +- 有技术的委托人还可以查看索引人使用他们可用的委托代币的能力。 如果索引人没有分配所有可用的代币,他们就不会为自己或他们的委托人赚取最大利润。 +- 现在在网络中,索引人可以选择关闭分配并在 1 到 28 天之间的任何时间收集奖励。 因此,索引人可能有很多尚未收集的奖励,因此他们的总奖励很低。 早期应该考虑到这一点。 -### Considering the query fee cut and indexing fee cut +### 考虑到查询费用的分成和索引费用的分成 -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the delegators are getting. The formula is: +如上文所述,你应该选择一个在设置他们的查询费分成和索引奖励分成方面透明和诚实的索引人。 委托人还应该看一下参数冷却时间,看看他们有多少时间缓冲区。 做完这些之后,计算委托人会获得的奖励金额就相当简单了。 计算公式是: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) -### Considering the indexers delegation pool +### 考虑索引人委托池 -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the delegator a share of the pool: +委托人必须考虑的另一件事是他们拥有的委托池的比例。 所有的委托奖励都是平均分配的,根据委托人存入池子的数额来决定池子的简单再平衡。 这使委托人就拥有了委托池的份额: ![Share formula](/img/Share-Forumla.png) -Using this formula, we can see that it is actually possible for an indexer who is offering only 20% to delegators, to actually be giving delegators an even better reward than an Indexer who is giving 90% to delegators. 
+使用这个公式,我们可以看到,只向委托人提供 20% 的索引人,实际上有可能比向委托人提供 90% 的索引人给予委托人更好的奖励。

-A delegator can therefore do the math to determine that the Indexer offering 20% to delegators, is offering a better return.
+因此,委托人可以进行数学计算,以确定向委托人提供 20% 的索引人是否提供了更好的回报。

-### Considering the delegation capacity
+### 考虑委托容量

-Another thing to consider is the delegation capacity. Currently the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.
+另一个需要考虑的是委托容量。 目前,委托比例被设置为 16。 这意味着,如果一个索引人质押了 1,000,000 GRT,他们的委托容量是 16,000,000 GRT 的委托令牌,他们可以在协议中使用。 任何超过这个数量的委托令牌将稀释所有的委托人奖励。

-Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards that they could be.
+想象一下,一个索引人有 100,000,000 GRT 委托给他们,而他们的容量只有 16,000,000 GRT。 这意味着实际上,84,000,000 GRT 令牌没有被用来赚取令牌。 而所有的委托人,以及索引人,赚取的奖励也远远低于他们可以赚取的。

-Therefore a delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.
+因此,委托人应始终考虑索引人的委托容量,并将其纳入自己的决策之中。

-## Video guide for the network UI
+## 网络界面视频指南

-This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
+本指南对本文档进行了全面的回顾,并介绍了在与用户界面交互时应如何考虑本文档中的所有内容。
From fecb1bce6532141721397e42a200423201dd3e12 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:52 -0500 Subject: [PATCH 236/241] New translations explorer.mdx (Spanish) --- pages/es/explorer.mdx | 244 +++++++++++++++++++++--------------------- 1 file changed, 122 insertions(+), 122 deletions(-) diff --git a/pages/es/explorer.mdx b/pages/es/explorer.mdx index c8df28cfe03f..6ede1f9592e3 100644 --- a/pages/es/explorer.mdx +++ b/pages/es/explorer.mdx @@ -2,210 +2,210 @@ title: The Graph Explorer --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +Bienvenido al explorador de The Graph, o como nos gusta llamarlo, tu portal descentralizado al mundo de los subgrafos y los datos de la red. 👩🏽‍🚀 Este explorador de The Graph consta de varias partes en las que puedes interactuar con otros desarrolladores de subgrafos, desarrolladores de dApp, Curadores, Indexadores y Delegadores. Para obtener una descripción general de The Graph Explorer, échale un vistazo al siguiente video (o sigue leyendo lo que hemos escrito para ti):
-## Subgraphs +## Subgrafos -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +Vamos primero por lo más importante, si acabas de terminar de implementar y publicar tu subgrafo en el Subgraph Studio, la pestaña Subgrafos en la parte superior de la barra de navegación es el lugar para ver tus propios subgrafos terminados (y los subgrafos de otros) en la red descentralizada. Aquí podrás encontrar el subgrafo exacto que estás buscando según la fecha de creación, el monto de señalización o el nombre que le han asignado. -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Imagen de Explorer 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +Cuando hagas clic en un subgrafo, podrás probar consultas en el playground y podrás aprovechar los detalles de la red para tomar decisiones informadas. También podrás señalar GRT en tu propio subgrafo o en los subgrafos de otros para que los indexadores sean conscientes de su importancia y calidad. Esto es fundamental porque señalar en un subgrafo incentiva su indexación, lo que significa que saldrá a la luz en la red para eventualmente entregar consultas. -![Explorer Image 2](/img/Subgraph-Details.png) +![Imagen de Explorer 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +En la página de cada subgrafo, aparecen varios detalles. Entre ellos se incluyen: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- Señalar/dejar de señalar un subgrafo +- Ver más detalles como gráficos, ID de implementación actual y otros metadatos +- Cambiar de versión para explorar iteraciones pasadas del subgrafo +- Consultar subgrafos a través de GraphQL +- Probar subgrafos en el playground +- Ver los Indexadores que están indexando en un subgrafo determinado +- Estadísticas de subgrafo (asignaciones, Curadores, etc.) +- Ver la entidad que publicó el subgrafo -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Imagen de Explorer 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## Participantes -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +Dentro de esta pestaña, obtendrás una vista panorámica de todas las personas que participan en las actividades de la red, como Indexadores, Delegadores y Curadores. 
A continuación, analizaremos en profundidad lo que significa cada pestaña para ti. -### 1. Indexers +### 1. Indexadores -![Explorer Image 4](/img/Indexer-Pane.png) +![Imagen de Explorer 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Comencemos con los Indexadores. Los Indexadores son la columna vertebral del protocolo, ya que son los que stakean en los subgrafos, los indexan y proveen consultas a cualquiera que consuma subgrafos. En la tabla de Indexadores, podrás ver los parámetros de delegación de un Indexador, su participación, cuánto han stakeado en cada subgrafo y cuántos ingresos han obtenido por las tarifas de consulta y las recompensas de indexación. Profundizaremos un poco más a continuación: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- Query Fee Cut: es el porcentaje de los reembolsos obtenidos por la tarifa de consulta que el Indexador conserva cuando se divide con los Delegadores +- Effective Reward Cut: es el recorte de recompensas por indexación que se aplica al pool de delegación. Si es negativo, significa que el Indexador está regalando parte de sus beneficios. Si es positivo, significa que el Indexador se queda con alguno de tus beneficios +- Cooldown Remaining: el tiempo restante que le permitirá al Indexador cambiar los parámetros de delegación. 
Los plazos de configuración son ajustados por los Indexadores cuando ellos actualizan sus parámetros de delegación
+- Owned: esta es la participación (o el stake) depositado por el Indexador, la cual puede reducirse por su mal comportamiento
+- Delegated: participación de los Delegadores que puede ser asignada por el Indexador, pero que no se puede recortar
+- Allocated: es el stake que los indexadores están asignando activamente a los subgrafos que están indexando
+- Available Delegation Capacity: la cantidad de participación delegada que los indexadores aún pueden recibir antes de que se sobredeleguen
+- Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de manera productiva. Cuando se excede parte del stake en la delegación, estos no contarán para las asignaciones o recompensas.
+- Query Fees: estas son las tarifas totales que los usuarios (clientes) han pagado por todas las consultas de un Indexador
+- Indexer Rewards: este es el total de recompensas del Indexador obtenidas por el Indexador y sus Delegadores durante todo el tiempo que trabajaron en conjunto. Las recompensas de los Indexadores se pagan mediante la emisión de GRT.

-Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button.
+Los Indexadores pueden ganar tanto tarifas de consulta como recompensas de indexación. Funcionalmente, esto sucede cuando los participantes de la red delegan GRT a un Indexador. Esto permite a los Indexadores recibir tarifas de consulta y recompensas en función de sus parámetros como Indexador. Los parámetros de Indexación se establecen haciendo clic en el lado derecho de la tabla o entrando en el perfil de un Indexador y haciendo clic en el botón "Delegate".

-To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
+Para obtener más información sobre cómo convertirse en Indexador, puedes consultar la [documentación oficial](/indexing) o [Guías del Indexador de The Graph Academy.](https://thegraph.academy/delegators/choosing-indexers/)

-![Indexing details pane](/img/Indexing-Details-Pane.png)
+![Panel de detalles de indexación](/img/Indexing-Details-Pane.png)

-### 2. Curators
+### 2. Curadores

-Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Los Curadores analizan los subgrafos para identificar qué subgrafos son de la más alta calidad. Una vez que un Curador ha encontrado un subgrafo potencialmente atractivo, puede curarlo señalándolo en su curva de vinculación. Al hacerlo, los Curadores informan a los Indexadores qué subgrafos son de alta calidad y necesitan ser indexados.

-Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. 
As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +Los Curadores pueden ser miembros de la comunidad, consumidores de datos o incluso desarrolladores de subgrafos que señalan en sus propios subgrafos depositando tokens GRT en una curva de vinculación. Al depositar GRT, los Curadores anclan sus participaciones como curadores de un subgrafo. Como resultado, los Curadores son elegibles para ganar una parte de las tarifas de consulta que genera el subgrafo que han señalado. La curva de vinculación incentiva a los Curadores a curar fuentes de datos de la más alta calidad. La tabla de Curador en esta sección te permitirá ver:

-- The date the Curator started curating
-- The number of GRT that was deposited
-- The number of shares a Curator owns
+- La fecha en que el Curador comenzó a curar
+- El número de GRT que se depositaron
+- El número de participaciones que posee un Curador

-![Explorer Image 6](/img/Curation-Overview.png)
+![Imagen de Explorer 6](/img/Curation-Overview.png)

-If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating)
+Si deseas obtener más información sobre la función de un Curador, puedes hacerlo visitando los siguientes enlaces de [The Graph Academy](https://thegraph.academy/curators/) o la [documentación oficial](/curating).

-### 3. Delegators
+### 3. Delegadores

-Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn.
+Los Delegadores juegan un rol esencial en la seguridad y descentralización que conforman la red de The Graph. Participan en la red delegando (es decir, "stakeando") tokens GRT a uno o varios Indexadores. Sin Delegadores, es menos probable que los Indexadores obtengan recompensas y tarifas significativas. Por lo tanto, los Indexadores buscan atraer Delegadores ofreciéndoles una parte de las recompensas de indexación y las tarifas de consulta que ganan.

-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)!
+Los Delegadores, a su vez, seleccionan a los Indexadores en función de una serie de diferentes parámetros, como el rendimiento que tenía ese indexador, las tasas de recompensa por indexación y los recortes compartidos de las tarifas de consulta. ¡La reputación dentro de la comunidad también puede influir en esto! ¡Se recomienda conectarse con los Indexadores seleccionados a través del [Discord de The Graph](https://thegraph.com/discord) o el [Foro de The Graph](https://forum.thegraph.com/)!
-![Explorer Image 7](/img/Delegation-Overview.png) +![Imagen de Explorer 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +La tabla de Delegadores te permitirá ver los Delegadores activos en la comunidad, así como las siguientes métricas: -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- El número de Indexadores a los que delega este Delegador +- La delegación principal de un Delegador +- Las recompensas que han ido acumulando, pero que aún no han retirado del protocolo +- Las recompensas realizadas, es decir, las que ya retiraron del protocolo +- Cantidad total de GRT que tienen actualmente dentro del protocolo +- La fecha en la que delegaron por última vez -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). +Si deseas obtener más información sobre cómo convertirte en un Delegador, ¡No busques más! Todo lo que tienes que hacer es dirigirte a la [documentación oficial](/delegating) o [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## Network +## Red (network) -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +En la sección Network (red), verás los KPI globales, así como la capacidad de cambiar a una base por ciclo y analizar las métricas de la red con más detalle. Estos detalles te darán una idea de cómo se está desempeñando la red a lo largo del tiempo. -### Activity +### Actividad (activity) -The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +La sección actividad tiene todas las métricas de red actuales, así como algunas métricas acumulativas a lo largo del tiempo. Aquí puedes ver cosas como: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- La cantidad total de stake que circula en estos momentos +- La participación que se divide entre los Indexadores y sus Delegadores +- Suministro total, GRT anclados y quemados desde el comienzo de la red +- Recompensas totales de Indexación desde el comienzo del protocolo +- Parámetros del protocolo como las recompensas de curación, tasa de inflación y más +- Recompensas y tarifas del ciclo actual -A few key details that are worth mentioning: +Algunos detalles clave que vale la pena mencionar: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. 
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Las tarifas de consulta representan las tarifas generadas por los consumidores**, y que pueden ser reclamadas (o no) por los Indexadores después de un período de al menos 7 ciclos (ver más abajo) después de que se han cerrado las asignaciones hacia los subgrafos y los datos que servían han sido validados por los consumidores. +- **Las recompensas de indexación representan la cantidad de recompensas que los Indexadores reclamaron por la emisión de la red durante el ciclo.** Aunque la emisión del protocolo es fija, las recompensas solo se anclan una vez que los Indexadores cierran sus asignaciones hacia los subgrafos que han indexado. Por lo tanto, el número de recompensas por ciclo suele variar (es decir, durante algunos ciclos, es posible que los Indexadores hayan cerrado colectivamente asignaciones que han estado abiertas durante muchos días). -![Explorer Image 8](/img/Network-Stats.png) +![Imagen de Explorer 8](/img/Network-Stats.png) -### Epochs +### Ciclos (epoch) -In the Epochs section you can analyse on a per-epoch basis, metrics such as: +En la sección de ciclos puedes analizar diferentes métricas por cada ciclo, tales como: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +- Inicio de ciclo o bloque final +- Tarifas de consulta generadas y recompensas de indexación recolectadas durante un ciclo específico +- Estado del ciclo, el cual se refiere al cobro y distribución de la tarifa de consulta y puede tener diferentes estados: + - El ciclo activo es aquel en la que los indexadores actualmente asignan su participación (staking) y cobran tarifas por consultas + - Los ciclos liquidados son aquellos en los que ya se han liquidado las recompensas y demás métricas. Esto significa que los Indexadores están sujetos a recortes si los consumidores abren disputas en su contra. + - Los ciclos de distribución son los ciclos en los que los canales correspondiente a los ciclos son establecidos y los Indexadores pueden reclamar sus reembolsos correspondientes a las tarifas de consulta. + - Los ciclos finalizados son los ciclos que no tienen reembolsos en cuanto a las tarifas de consulta, estos son reclamados por parte de los Indexadores, por lo que estos se consideran como finalizados. 
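The epoch lifecycle and the rebate delay described in the bullets above can be summarized in a small sketch. This is purely illustrative: the type, constant, and function names are invented for this example and are not part of any Graph contract or SDK; the only facts taken from the text are the four epoch states and the minimum wait of seven epochs before query fee rebates can be claimed.

```typescript
// Illustrative sketch only: these names are invented for this example and are
// not part of any Graph contract or SDK.
type EpochStatus = "active" | "settling" | "distributing" | "finalized";

// The four states listed above, in the order an epoch moves through them.
const EPOCH_LIFECYCLE: EpochStatus[] = ["active", "settling", "distributing", "finalized"];

// Per the text above, query fee rebates become claimable only after a period
// of at least 7 epochs once an allocation has been closed.
const REBATE_DELAY_EPOCHS = 7;

function isRebateClaimable(currentEpoch: number, allocationClosedEpoch: number): boolean {
  return currentEpoch - allocationClosedEpoch >= REBATE_DELAY_EPOCHS;
}

// Example: an allocation closed in epoch 100 becomes claimable from epoch 107 onward.
console.log(EPOCH_LIFECYCLE.join(" -> "));
console.log(isRebateClaimable(106, 100)); // false
console.log(isRebateClaimable(107, 100)); // true
```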
-![Explorer Image 9](/img/Epoch-Stats.png) +![Imagen de Explorer 9](/img/Epoch-Stats.png) -## Your User Profile +## Tu perfil de usuario -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Ahora que hemos hablado de las estadísticas de la red, pasemos a tu perfil personal. Tu perfil personal es el lugar donde puedes ver tu actividad personal dentro de la red, sin importar cómo estés participando en la red. Tu billetera Ethereum actuará como tu perfil de usuario y desde tu panel de usuario (dashboard) podrás ver lo siguiente: -### Profile Overview +### Información general del perfil -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +Aquí es donde puedes ver las acciones actuales que realizaste. Aquí también podrás encontrar la información de tu perfil, la descripción y el sitio web (si agregaste uno). -![Explorer Image 10](/img/Profile-Overview.png) +![Imagen de Explorer 10](/img/Profile-Overview.png) -### Subgraphs Tab +### Pestaña de subgrafos -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +Si haces clic en la pestaña subgrafos, verás tus subgrafos publicados. Esto no incluirá ningún subgrafo implementado con la modalidad de CLI o con fines de prueba; los subgrafos solo aparecerán cuando se publiquen en la red descentralizada. -![Explorer Image 11](/img/Subgraphs-Overview.png) +![Imagen de Explorer 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### Pestaña de indexación -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +Si haces clic en la pestaña Indexación, encontrarás una tabla con todas las asignaciones activas e históricas hacia los subgrafos, así como gráficos que puedes analizar y ver tu desempeño anterior como Indexador. -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta. 
Verás las siguientes métricas:

-- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed
-- Total Query Fees - the total fees that users have paid for queries served by you over time
-- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT
-- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators
-- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators
-- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior
+- Delegated Stake: la participación de los Delegadores que puedes asignar pero que no se puede recortar
+- Total Query Fees: las tarifas totales que los usuarios han pagado por las consultas que has atendido a lo largo del tiempo
+- Indexer Rewards: la cantidad total de recompensas de Indexador que has recibido, en GRT
+- Fee Cut: el porcentaje de los reembolsos de tarifas de consulta que conservarás al repartirlos con tus Delegadores
+- Rewards Cut: el porcentaje de las recompensas de Indexador que conservarás al repartirlas con tus Delegadores
+- Owned: tu participación (stake) depositada, que podría reducirse por un comportamiento malicioso o incorrecto en la red

-![Explorer Image 12](/img/Indexer-Stats.png)
+![Imagen de Explorer 12](/img/Indexer-Stats.png)

-### Delegating Tab
+### Pestaña de delegación

-Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+Los Delegadores son importantes para la red de The Graph. Un Delegador debe usar su conocimiento para elegir un Indexador que le proporcione un retorno saludable de recompensas. Aquí puedes encontrar detalles de tus delegaciones activas e históricas, junto con las métricas de los Indexadores a los que delegaste en el pasado.

-In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics.
+En la primera mitad de la página, puedes ver tu gráfico de delegación, así como el gráfico de solo recompensas. A la izquierda, puedes ver los KPI que reflejan tus métricas de delegación actuales.

-The Delegator metrics you’ll see here in this tab include:
+Las métricas de Delegador que verás aquí en esta pestaña incluyen:

-- Total delegation rewards
-- Total unrealized rewards
-- Total realized rewards
+- Recompensas totales de delegación (Total delegation rewards)
+- Recompensas totales no realizadas (Total unrealized rewards)
+- Recompensas totales realizadas (Total realized rewards)

-In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc).
+En la segunda mitad de la página, tienes la tabla de delegaciones. Aquí puedes ver los Indexadores a los que delegaste, así como sus detalles (como recortes de recompensas, tiempo de enfriamiento, etc.).

-With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. 
+Con los botones en el lado derecho de la tabla, puedes administrar tu delegación: delegar más, anular la delegación o retirar tu delegación después del período de descongelación.

-Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable).
+Ten en cuenta que esta tabla se puede desplazar horizontalmente, así que si te desplazas hasta el extremo derecho, también podrás ver el estado de tu delegación (delegando, anulando la delegación, retirable).

-![Explorer Image 13](/img/Delegation-Stats.png)
+![Imagen de Explorer 13](/img/Delegation-Stats.png)

-### Curating Tab
+### Pestaña de curación

-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+En la pestaña Curación, encontrarás todos los subgrafos a los que estás señalando (lo que te permite recibir tarifas de consulta). La señalización permite a los Curadores destacar un subgrafo importante y fiable a los Indexadores, dándoles a entender que debe ser indexado.

-Within this tab, you’ll find an overview of:
+Dentro de esta pestaña, encontrarás una descripción general de:

-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
-- Updated at date details
+- Todos los subgrafos que estás curando con detalles de la señalización actual
+- Participaciones totales en cada subgrafo
+- Recompensas de consulta por cada subgrafo
+- Detalles de la fecha de última actualización

-![Explorer Image 14](/img/Curation-Stats.png)
+![Imagen de Explorer 14](/img/Curation-Stats.png)

-## Your Profile Settings
+## Configuración de tu perfil

-Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators.
+Dentro de tu perfil de usuario, podrás administrar los detalles de tu perfil personal (como configurar un nombre de ENS). Si eres un Indexador, tienes aún más acceso a la configuración al alcance de tu mano. En tu perfil de usuario, podrás configurar los parámetros y operadores de tu delegación.

-- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set
-- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
+- Los operadores toman acciones limitadas en el protocolo en nombre del Indexador, como abrir y cerrar asignaciones. Los operadores suelen ser otras direcciones de Ethereum, separadas de su billetera de staking, con acceso cerrado a la red que los Indexadores pueden configurar personalmente
+- Los parámetros de delegación te permiten controlar la distribución de GRT entre tú y tus Delegadores.

-![Explorer Image 15](/img/Profile-Settings.png)
+![Imagen de Explorer 15](/img/Profile-Settings.png)

-As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. 
You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +Como tu portal oficial en el mundo de los datos descentralizados, The Graph Explorer te permite realizar una variedad de acciones, sin importar tu rol en la red. Puedes acceder a la configuración de tu perfil abriendo el menú desplegable junto a tu dirección y luego haciendo clic en el botón de configuración (settings).
![Wallet details](/img/Wallet-Details.png)
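The explorer pages above note that subgraphs can be queried via GraphQL and tested in the playground. A minimal sketch of the same kind of query made outside the playground is shown below, in TypeScript; the endpoint, entity, and field names are placeholders rather than a real subgraph, and the built-in `fetch` assumes Node 18 or newer. Substitute the query URL and schema of the subgraph you are actually exploring.

```typescript
// Hypothetical endpoint and schema: replace with the query URL and entities of
// the subgraph you are exploring. Requires Node 18+ for built-in fetch.
const SUBGRAPH_URL =
  "https://gateway.example.com/api/<api-key>/subgraphs/id/<subgraph-id>";

// A placeholder GraphQL query; field names depend on the subgraph's schema.
const query = `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`;

async function querySubgraph(): Promise<void> {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  if (!res.ok) {
    throw new Error(`Query failed with HTTP ${res.status}`);
  }
  const { data, errors } = await res.json();
  if (errors) {
    console.error(errors);
    return;
  }
  console.log(data);
}

querySubgraph().catch(console.error);
```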
From bc375eb178b5e7ff1fc4271d6e51349a3e522316 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:53 -0500 Subject: [PATCH 237/241] New translations explorer.mdx (Arabic) --- pages/ar/explorer.mdx | 240 +++++++++++++++++++++--------------------- 1 file changed, 120 insertions(+), 120 deletions(-) diff --git a/pages/ar/explorer.mdx b/pages/ar/explorer.mdx index c8df28cfe03f..ae31b016d8a4 100644 --- a/pages/ar/explorer.mdx +++ b/pages/ar/explorer.mdx @@ -1,14 +1,14 @@ --- -title: The Graph Explorer +title: مستكشف --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +مرحبا بك في مستكشف Graph ، أو كما نحب أن نسميها ، بوابتك اللامركزية في عالم subgraphs وبيانات الشبكة. 👩🏽‍🚀 مستكشف TheGraph يتكون من عدة اجزاء حيث يمكنك التفاعل مع مطوري Subgraphs الاخرين ، ومطوري dApp ،والمنسقين والمفهرسين، والمفوضين. للحصول على نظرة عامة حول the Graph Explorer، راجع الفيديو أدناه (أو تابع القراءة أدناه):
@@ -16,196 +16,196 @@ Welcome to the Graph Explorer, or as we like to call it, your decentralized port ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +أولا ، إذا انتهيت من نشر Subgraphs الخاص بك في Subgraph Studio ، فإن علامة التبويب Subgraphs في الجزء العلوي من شريط التنقل هي المكان المناسب لعرض Subgraphs الخاصة بك (و Subgraphs الآخرين) على الشبكة اللامركزية. هنا ، ستتمكن من العثور على Subgraphs الذي تبحث عنه بدقة بناء على تاريخ الإنشاء أو مقدار الإشارة(signal amount) أو الاسم. -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![صورة المستكشف 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +عند النقر على Subgraphs ، يمكنك اختبار الاستعلامات وستكون قادرا على الاستفادة من تفاصيل الشبكة لاتخاذ قرارات صائبة. سيمكنك ايضا من الإشارة إلى GRT على Subgraphs الخاص بك أو subgraphs الآخرين لجعل المفهرسين على علم بأهميته وجودته. هذا أمر مهم جدا وذلك لأن الإشارة ل Subgraphs تساعد المفهرسين في اختيار ذلك ال Subgraph لفهرسته ، مما يعني أنه سيظهر على الشبكة لتقديم الاستعلامات. -![Explorer Image 2](/img/Subgraph-Details.png) +![صورة المستكشف 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +في كل صفحة مخصصة ل subgraphs ، تظهر العديد من التفاصيل. وهذا يتضمن -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- أشر/الغي الإشارة على Subgraphs +- اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى +- بدّل بين الإصدارات وذلك لاستكشاف التكرارات السابقة ل subgraphs +- استعلم عن subgraphs عن طريق GraphQL +- اختبار subgraphs في playground +- اعرض المفهرسين الذين يفهرسون Subgraphs معين +- إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ) +- اعرض من قام بنشر ال Subgraphs -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![صورة المستكشف 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## المشاركون -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +ضمن علامة التبويب هذه ، ستحصل على نظرة شاملة لجميع الأشخاص المشاركين في أنشطة الشبكة ، مثل المفهرسين والمفوضين Delegators والمنسقين Curators. سندخل في نظرة شاملة أدناه لما تعنيه كل علامة تبويب. -### 1. Indexers +### 2. المنسقون Curators -![Explorer Image 4](/img/Indexer-Pane.png) +![صورة المستكشف 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. 
Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Let’s start with the Indexers. دعونا نبدأ مع المفهرسين المفهرسون هم العمود الفقري للبروتوكول ، كونهم بقومون بفهرسة ال Subgraph ، وتقديم الاستعلامات إلى أي شخص يستخدم subgraphs. في جدول المفهرسين ، يمكنك رؤية البارامترات الخاصة بتفويض المفهرسين ، وحصتهم ، ومقدار ما قاموا بتحصيله في كل subgraphs ، ومقدار الإيرادات التي حصلو عليها من رسوم الاستعلام ومكافآت الفهرسة. Deep dives below: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- اقتطاع رسوم الاستعلام Query Fee Cut - هي النسبة المئوية لخصم رسوم الاستعلام والتي يحتفظ بها المفهرس عند التقسيم مع المفوضين Delegators +- اقتطاع المكافأة الفعالة Effective Reward Cut - هو اقتطاع مكافأة الفهرسة indexing reward cut المطبقة على مجموعة التفويضات. إذا كانت سالبة ، فهذا يعني أن المفهرس يتنازل عن جزء من مكافآته. إذا كانت موجبة، فهذا يعني أن المفهرس يحتفظ ببعض مكافآته +- فترة التهدئة Cooldown المتبقية - هو الوقت المتبقي حتى يتمكن المفهرس من تغيير بارامترات التفويض. يتم إعداد فترات التهدئة من قبل المفهرسين عندما يقومون بتحديث بارامترات التفويض الخاصة بهم +- مملوكة Owned - هذه هي حصة المفهرس المودعة ، والتي قد يتم شطبها بسبب السلوك الضار أو غير الصحيح +- مفوضة Delegated - هي حصة مفوضة من قبل المفوضين والتي يمكن تخصيصها بواسطة المفهرس ، لكن لا يمكن شطبها +- مخصصة Allocated - حصة يقوم المفهرسون بتخصيصها بشكل نشط نحو subgraphs التي يقومون بفهرستها +- سعة التفويض المتاحة Available Delegation Capacity - هو مقدار الحصة المفوضة التي يمكن للمفهرسين تلقيها قبل الوصول للحد الأقصى لتلقي التفويضات overdelegated +- سعة التفويض القصوى Max Delegation Capacity - هي الحد الأقصى من الحصة المفوضة التي يمكن للمفهرس قبولها. لا يمكن استخدام الحصة المفوضة الزائدة للمخصصات allocations أو لحسابات المكافآت. 
+- رسوم الاستعلام Query Fees - هذا هو إجمالي الرسوم التي دفعها المستخدمون للاستعلامات التي يقدمها المفهرس طوال الوقت +- مكافآت المفهرس Indexer Rewards - هو مجموع مكافآت المفهرس التي حصل عليها المفهرس ومفوضيهم Delegators. تدفع مكافآت المفهرس ب GRT. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +يمكن للمفهرسين كسب كلا من رسوم الاستعلام ومكافآت الفهرسة. يحدث هذا عندما يقوم المشاركون في الشبكة بتفويض GRT للمفهرس. يتيح ذلك للمفهرسين تلقي رسوم الاستعلام ومكافآت بناء على بارامترات المفهرس الخاصة به. يتم تعيين بارامترات الفهرسة عن طريق النقر على الجانب الأيمن من الجدول ، أو بالانتقال إلى ملف تعريف المفهرس والنقر فوق زر "Delegate". -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +لمعرفة المزيد حول كيفية أن تصبح مفوضا كل ما عليك فعله هو التوجه إلى [ الوثائق الرسمية ](/delegating) أو [ أكاديمية The Graph ](https://docs.thegraph.academy/network/delegators). ![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. Curators +### 3. المفوضون Delegators -Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +يقوم المنسقون بتحليل ال subgraphs لتحديد ال subgraphs ذات الجودة الأعلى. عندما يجد المنسق subgraph يراه جيدا ،فيمكنه تنسيقه من خلال الإشارة إلى منحنى الترابط الخاص به. وبهذا يسمح المنسقون للمفهرسين بمعرفة ماهي ال subgraphs عالية الجودة والتي يجب فهرستها. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +يمكن للمنسقين أن يكونوا من أعضاء المجتمع أو من مستخدمي البيانات أو حتى من مطوري ال subgraph والذين يشيرون إلى ال subgraphs الخاصة بهم وذلك عن طريق إيداع توكن GRT في منحنى الترابط. وبإيداع GRT ، يقوم المنسقون بصك أسهم التنسيق في ال subgraph. نتيجة لذلك ، يكون المنسقون مؤهلين لكسب جزء من رسوم الاستعلام التي يُنشئها ال subgraph المشار إليها. يساعد منحنى الترابط المنسقين على تنسيق مصادر البيانات الأعلى جودة. 
جدول المنسق في هذا القسم سيسمح لك برؤية: -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- التاريخ الذي بدأ فيه المنسق بالتنسق +- عدد ال GRT الذي تم إيداعه +- عدد الأسهم التي يمتلكها المنسق -![Explorer Image 6](/img/Curation-Overview.png) +![صورة المستكشف 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) +إذا كنت تريد معرفة المزيد عن دور المنسق ، فيمكنك القيام بذلك عن طريق زيارة الروابط التالية ـ [ أكاديمية The Graph ](https://thegraph.academy/curators/) أو \[ الوثائق الرسمية. \](/ curating) -### 3. Delegators +### 3. المفوضون Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +يلعب المفوضون دورا رئيسيا في الحفاظ على الأمن واللامركزية في شبكة The Graph. يشاركون في الشبكة عن طريق تفويض (أي ، "Staking") توكن GRT إلى مفهرس واحد أو أكثر. بدون المفوضين، من غير المحتمل أن يربح المفهرسون مكافآت ورسوم مجزية. لذلك ، يسعى المفهرسون إلى جذب المفوضين من خلال منحهم جزءا من مكافآت الفهرسة ورسوم الاستعلام التي يكسبونها. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! +يقوم المفوضون بدورهم باختيار المفهرسين بناء على عدد من المتغيرات المختلفة ، مثل الأداء السابق ، ومعدلات مكافأة الفهرسة ، واقتطاع رسوم الاستعلام query fee cuts. يمكن أن تلعب السمعة داخل المجتمع دورا في هذا! يوصى بالتواصل مع المفهرسين المختارين عبر [ The Graph's Discord ](https://thegraph.com/discord) أو [ منتدى The Graph ](https://forum.thegraph.com/)! -![Explorer Image 7](/img/Delegation-Overview.png) +![صورة المستكشف 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +جدول المفوضين سيسمح لك برؤية المفوضين النشطين في المجتمع ، بالإضافة إلى مقاييس مثل: -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- عدد المفهرسين المفوض إليهم +- التفويض الأصلي للمفوض Delegator’s original delegation +- المكافآت التي جمعوها والتي لم يسحبوها من البروتوكول +- المكافآت التي تم سحبها من البروتوكول +- كمية ال GRT التي يمتلكونها حاليا في البروتوكول +- تاريخ آخر تفويض لهم -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). +If you want to learn more about how to become a Delegator, look no further! 
لمعرفة المزيد حول كيفية أن تصبح مفهرسا ، يمكنك إلقاء نظرة على [ الوثائق الرسمية ](/indexing) أو [ دليل مفهرس أكاديمية The Graph. -## Network +## الشبكة Network -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +في قسم الشبكة ، سترى KPIs بالإضافة إلى القدرة على التبديل بين الفترات وتحليل مقاييس الشبكة بشكل مفصل. ستمنحك هذه التفاصيل فكرة عن كيفية أداء الشبكة بمرور الوقت. -### Activity +### النشاط Activity -The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +يحتوي قسم النشاط على جميع مقاييس الشبكة الحالية بالإضافة إلى بعض المقاييس المتراكمة بمرور الوقت. هنا يمكنك رؤية أشياء مثل: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- إجمالي حصة الشبكة الحالية +- الحصة المقسمة بين المفهرسين ومفوضيهم +- إجمالي العرض ،و الصك ،وال GRT المحروقة منذ بداية الشبكة +- إجمالي مكافآت الفهرسة منذ بداية البروتوكول +- بارامترات البروتوكول مثل مكافأة التنسيق ومعدل التضخم والمزيد +- رسوم ومكافآت الفترة الحالية -A few key details that are worth mentioning: +بعض التفاصيل الأساسية الجديرة بالذكر: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- ** رسوم الاستعلام هي الرسوم التي يولدها المستخدمون** ،ويمكن للمفهرسين المطالبة بها (أو لا) بعد مدة لا تقل عن 7 فترات (انظر أدناه) بعد إغلاق مخصصاتهم لل subgraphs والتحقق من صحة البيانات التي قدموها من قبل المستخدمين. +- ** مكافآت الفهرسة هي مقدار المكافآت التي حصل عليها المفهرسون من انتاجات الشبكة خلال الفترة. ** على الرغم من أن انتاجات البروتوكول ثابتة إلا أنه لا يتم صك المكافآت إلا بعد إغلاق المفهرسين لمخصصاتهم ل subgraphs التي قاموا بفهرستها. وبالتالي ، يختلف عدد المكافآت لكل فترة (على سبيل المثال ، خلال بعض الفترات ، ربما يكون المفهرسون قد أغلقوا المخصصات التي كانت مفتوحة لعدة أيام). -![Explorer Image 8](/img/Network-Stats.png) +![صورة المستكشف 8](/img/Network-Stats.png) -### Epochs +### الفترات Epochs -In the Epochs section you can analyse on a per-epoch basis, metrics such as: +في قسم الفترات، يمكنك تحليل مقاييس كل فترة مثل: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. 
This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +- بداية الفترة أو نهايتها +- مكافآت رسوم الاستعلام والفهرسة التي تم جمعها خلال فترة معينة +- حالة الفترة، والتي تشير إلى رسوم الاستعلام وتوزيعها ويمكن أن يكون لها حالات مختلفة: + - الفترة النشطة هي الفترة التي يقوم فيها المفهرسون حاليا بتخصيص الحصص وتحصيل رسوم الاستعلام + - فترات التسوية هي تلك الفترات التي يتم فيها تسوية قنوات الحالة state channels. هذا يعني أن المفهرسين يكونون عرضة للشطب إذا فتح المستخدمون اعتراضات ضدهم. + - فترات التوزيع هي تلك الفترات التي يتم فيها تسوية قنوات الحالة للفترات ويمكن للمفهرسين المطالبة بخصم رسوم الاستعلام الخاصة بهم. + - الفترات النهائية هي تلك الفترات التي ليس بها خصوم متبقية على رسوم الاستعلام للمطالبة بها من قبل المفهرسين ، وبالتالي يتم الانتهاء منها. -![Explorer Image 9](/img/Epoch-Stats.png) +![صورة المستكشف 9](/img/Epoch-Stats.png) -## Your User Profile +## ملف تعريف المستخدم الخاص بك -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +الآن بعد أن تحدثنا عن احصائيات الشبكة ، دعنا ننتقل إلى ملفك الشخصي. ملفك الشخصي هو المكان المناسب لك لمشاهدة نشاط الشبكة ، بغض النظر عن كيفية مشاركتك في الشبكة. ستعمل محفظة Ethereum الخاصة بك كملف تعريف المستخدم الخاص بك ، وباستخدام User Dashboard، ستتمكن من رؤية: -### Profile Overview +### نظرة عامة على الملف الشخصي -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +هذا هو المكان الذي يمكنك فيه رؤية الإجراءات الحالية التي اتخذتها. وأيضا هو المكان الذي يمكنك فيه العثور على معلومات ملفك الشخصي والوصف وموقع الويب (إذا قمت بإضافته). -![Explorer Image 10](/img/Profile-Overview.png) +![صورة المستكشف 10](/img/Profile-Overview.png) -### Subgraphs Tab +### تبويب ال Subgraphs -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +إذا قمت بالنقر على تبويب Subgraphs ، فسترى ال subgraphs المنشورة الخاصة بك. لن يشمل ذلك أي subgraphs تم نشرها ب CLI لأغراض الاختبار - لن تظهر ال subgraphs إلا عند نشرها على الشبكة اللامركزية. -![Explorer Image 11](/img/Subgraphs-Overview.png) +![صورة المستكشف 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### تبويب الفهرسة -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +إذا قمت بالنقر على تبويب الفهرسة "Indexing " ، فستجد جدولا به جميع المخصصات النشطة والتاريخية ل subgraphs ، بالإضافة إلى المخططات التي يمكنك تحليلها ورؤية أدائك السابق كمفهرس. -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. 
سترى المقاييس التالية: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- الحصة المفوضة Delegated Stake - هي الحصة المفوضة من قبل المفوضين والتي يمكنك تخصيصها ولكن لا يمكن شطبها +- إجمالي رسوم الاستعلام Total Query Fees - هو إجمالي الرسوم التي دفعها المستخدمون مقابل الاستعلامات التي قدمتها بمرور الوقت +- مكافآت المفهرس Indexer Rewards - هو المبلغ الإجمالي لمكافآت المفهرس التي تلقيتها ك GRT +- اقتطاع الرسوم Fee Cut -هي النسبة المئوية لخصوم رسوم الاستعلام التي ستحتفظ بها عند التقسيم مع المفوضين +- اقتطاع المكافآت Rewards Cut -هي النسبة المئوية لمكافآت المفهرس التي ستحتفظ بها عند التقسيم مع المفوضين +- مملوكة Owned - هي حصتك المودعة ، والتي يمكن شطبها بسبب السلوك الضار أو غير الصحيح -![Explorer Image 12](/img/Indexer-Stats.png) +![صورة المستكشف 12](/img/Indexer-Stats.png) -### Delegating Tab +### تبويب التفويض Delegating Tab -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +المفوضون مهمون لشبكة the Graph. يجب أن يستخدم المفوض معرفته لاختيار مفهرسا يوفر عائدا على المكافآت. هنا يمكنك العثور على تفاصيل تفويضاتك النشطة والتاريخية ، مع مقاييس المفهرسين الذين قمت بتفويضهم. -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +في النصف الأول من الصفحة ، يمكنك رؤية مخطط التفويض الخاص بك ، بالإضافة إلى مخطط المكافآت فقط. إلى اليسار ، يمكنك رؤية KPIs التي تعكس مقاييس التفويض الحالية. -The Delegator metrics you’ll see here in this tab include: +مقاييس التفويض التي ستراها هنا في علامة التبويب هذه تشمل ما يلي: -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- إجمالي مكافآت التفويض +- إجمالي المكافآت الغير محققة +- إجمالي المكافآت المحققة -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +في النصف الثاني من الصفحة ، لديك جدول التفويضات. هنا يمكنك رؤية المفهرسين الذين فوضتهم ، بالإضافة إلى تفاصيلهم (مثل المكافآت المقتطعة rewards cuts، و cooldown ، الخ). With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +باستخدام الأزرار الموجودة على الجانب الأيمن من الجدول ، يمكنك إدارة تفويضاتك أو تفويض المزيد أو إلغاء التفويض أو سحب التفويض بعد فترة الذوبان thawing. 
-![Explorer Image 13](/img/Delegation-Stats.png) +![صورة المستكشف 13](/img/Delegation-Stats.png) -### Curating Tab +### تبويب التنسيق Curating -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +في علامة التبويب Curation ، ستجد جميع ال subgraphs التي تشير إليها (مما يتيح لك تلقي رسوم الاستعلام). الإشارة تسمح للمنسقين التوضيح للمفهرسين ماهي ال subgraphs ذات الجودة العالية والموثوقة ، مما يشير إلى ضرورة فهرستها. -Within this tab, you’ll find an overview of: +ضمن علامة التبويب هذه ، ستجد نظرة عامة حول: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph +- جميع ال subgraphs التي تقوم بتنسيقها مع تفاصيل الإشارة +- إجمالي الحصة لكل subgraph +- مكافآت الاستعلام لكل subgraph - Updated at date details -![Explorer Image 14](/img/Curation-Stats.png) +![صورة المستكشف 14](/img/Curation-Stats.png) -## Your Profile Settings +## إعدادات ملف التعريف الخاص بك -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +ضمن ملف تعريف المستخدم الخاص بك ، ستتمكن من إدارة تفاصيل ملفك الشخصي (مثل إعداد اسم ENS). إذا كنت مفهرسا ، فستستطيع الوصول إلى إعدادت أكثر. في ملف تعريف المستخدم الخاص بك ، ستتمكن من إعداد بارامترات التفويض والمشغلين. -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- Operators تتخذ إجراءات محدودة في البروتوكول نيابة عن المفهرس ، مثل عمليات فتح وإغلاق المخصصات. Operators هي عناوين Ethereum أخرى ، منفصلة عن محفظة staking الخاصة بهم ، مع بوابة وصول للشبكة التي يمكن للمفهرسين تعيينها بشكل شخصي +- تسمح لك بارامترات التفويض بالتحكم في توزيع GRT بينك وبين المفوضين. -![Explorer Image 15](/img/Profile-Settings.png) +![صورة المستكشف 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +كبوابتك الرسمية إلى عالم البيانات اللامركزية ، يتيح لك Graph Explorer اتخاذ مجموعة متنوعة من الإجراءات ، بغض النظر عن دورك في الشبكة. يمكنك الوصول إلى إعدادات ملفك الشخصي عن طريق فتح القائمة المنسدلة بجوار عنوانك ، ثم النقر على زر Settings. -
![Wallet details](/img/Wallet-Details.png)
+
![تفاصيل المحفظة](/img/Wallet-Details.png)
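The Indexer tables described in the translations above define the query fee cut and the reward cut as the percentages an Indexer keeps when splitting earnings with its Delegators. A back-of-the-envelope sketch of that split follows; the parameter names and numbers are invented for illustration and deliberately ignore finer details such as the Indexer's own self-stake.

```typescript
// Simplified illustration of the split described above. The names below are
// invented for this sketch and are not contract or SDK identifiers.
interface IndexerParams {
  queryFeeCut: number;       // fraction of query fee rebates the Indexer keeps, e.g. 0.10
  indexingRewardCut: number; // fraction of indexing rewards the Indexer keeps, e.g. 0.15
}

function splitWithDelegators(
  params: IndexerParams,
  queryFeesGRT: number,
  indexingRewardsGRT: number
): { indexerShare: number; delegatorShare: number } {
  const indexerShare =
    queryFeesGRT * params.queryFeeCut +
    indexingRewardsGRT * params.indexingRewardCut;
  const delegatorShare =
    queryFeesGRT * (1 - params.queryFeeCut) +
    indexingRewardsGRT * (1 - params.indexingRewardCut);
  return { indexerShare, delegatorShare };
}

// Example: a 10% fee cut and a 15% reward cut over 1000 GRT of fees and 2000 GRT of rewards.
console.log(splitWithDelegators({ queryFeeCut: 0.1, indexingRewardCut: 0.15 }, 1000, 2000));
// -> { indexerShare: 400, delegatorShare: 2600 }
```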
From e2afda9d34757e340ade88c16ada9785c830342c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:54 -0500 Subject: [PATCH 238/241] New translations explorer.mdx (Japanese) --- pages/ja/explorer.mdx | 246 +++++++++++++++++++++--------------------- 1 file changed, 123 insertions(+), 123 deletions(-) diff --git a/pages/ja/explorer.mdx b/pages/ja/explorer.mdx index c8df28cfe03f..c0ed9a036920 100644 --- a/pages/ja/explorer.mdx +++ b/pages/ja/explorer.mdx @@ -1,211 +1,211 @@ --- -title: The Graph Explorer +title: エクスプローラー --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +グラフエクスプローラーは、サブグラフとネットワークデータの世界への分散型ポータルです。 👩🏽‍🚀 グラフエクスプローラーは、他のサブグラフ開発者、dapp開発者、キュレーター、インデクサー、デリゲーターと交流できる複数のパートで構成されています。 グラフエクスプローラーの概要については、以下のビデオをご覧ください。
-## Subgraphs +## サブグラフ -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +まず最初に、ナビゲーションバーの上部にある「Subgraphs」タブは、分散型ネットワーク上の自分の完成したサブグラフ(および他の人のサブグラフ)を見るための場所です。 ここでは、作成日、シグナル量、名前などから、探しているサブグラフを見つけることができます。 -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![エクスプローラーイメージ 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +サブグラフをクリックすると、プレイグラウンドでクエリをテストすることができ、ネットワークの詳細を活用して情報に基づいた意思決定を行うことができます。 また、自分のサブグラフや他の人のサブグラフで GRT をシグナリングして、その重要性や品質をインデクサに認識させることができます。 これは、サブグラフにシグナルを送ることで、そのサブグラフがインデックス化され、最終的にクエリに対応するためにネットワーク上に現れてくることを意味するため、非常に重要です。 -![Explorer Image 2](/img/Subgraph-Details.png) +![エクスプローラーイメージ 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +各サブグラフの専用ページでは、いくつかの詳細が表示されます。 その内容は以下の通りです: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- サブグラフのシグナル/アンシグナル +- チャート、現在のデプロイメント ID、その他のメタデータなどの詳細情報の表示 +- バージョンを切り替えて、サブグラフの過去のイテレーションを調べる +- GraphQL によるサブグラフのクエリ +- プレイグラウンドでのサブグラフのテスト +- 特定のサブグラフにインデクシングしているインデクサーの表示 +- サブグラフの統計情報(割り当て数、キュレーターなど) +- サブグラフを公開したエンティティの表示 -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![エクスプローラーイメージ 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## 参加者 -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +このタブでは、Indexer、Delegator、Curators など、ネットワークアクティビティに参加している全ての人を俯瞰できます。 以下では、各タブの意味を詳しく説明します。 -### 1. Indexers +### 1. インデクサー(Indexers) -![Explorer Image 4](/img/Indexer-Pane.png) +![エクスプローラーイメージ 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +まず、インデクサーから説明します。 インデクサーはプロトコルのバックボーンであり、サブグラフに利害関係を持ち、インデックスを作成し、サブグラフを消費する人にクエリを提供します。 インデクサーテーブルでは、インデクサーのデリゲーションパラメータ、ステーク、各サブグラフへのステーク量、クエリフィーとインデクシング報酬による収益を確認することができます。 詳細は以下のとおりです: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. 
If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- Query Fee Cut - デリゲーターとの分配時にインデクサーが保持するクエリーフィーリベートの割合 +- Effective Reward Cut - デリゲーションプールに適用されるインデックス報酬のカット。 これがマイナスの場合、インデクサーが報酬の一部を手放していることを意味します。 プラスの場合は、インデクサーが報酬の一部を保持していることを意味します +- Cooldown Remaining - インデクサーが上記のデリゲーションパラメータを変更できるようになるまでの残り時間です。 クールダウン期間は、インデクサーがデリゲーションパラメータを更新する際に設定します +- Owned - インデクサーが預けているステークで、悪意のある行為や不正な行為があった場合にスラッシュされる可能性があります +- Delegated - デリゲーターからのステークで、インデクサーが割り当てることができるが、スラッシュはできません +- Allocated - インデックスを作成中のサブグラフに対してインデクサーが割り当てているステーク額 +- Available Delegation Capacity - 過剰デリゲーションになる前に、インデクサーが受け取ることができるデリゲーション・ステーク量 +- Max Delegation Capacity - インデクサーが生産的に受け取ることができるデリゲーション・ステークの最大量。 過剰なデリゲーション・ステークは割り当てや報酬の計算には使用できません +- Query Fees - あるインデクサーのクエリに対してエンドユーザーが支払った手数料の合計額です +- Indexer Rewards - インデクサーとそのデリゲーターが過去に獲得したインデクサー報酬の総額。 インデクサー報酬は GRT の発行によって支払われます -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +インデクサーはクエリ報酬とインデックス報酬の両方を得ることができます。 機能的には、ネットワーク参加者が GRT をインデクサーにデリゲーションしたときに発生します。 これにより、インデクサーはそのインデクサーパラメータに応じてクエリフィーや報酬を受け取ることができます。 インデックスパラメータの設定は、表の右側をクリックするか、インデクサーのプロフィールにアクセスして「Delegate」ボタンをクリックすることで行うことができます。 -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +インデクサーになる方法については、公式ドキュメントや The Graph Academy のインデクサーガイドを参考にしてください。 -![Indexing details pane](/img/Indexing-Details-Pane.png) +![インデックス作成の詳細](/img/Indexing-Details-Pane.png) -### 2. Curators +### 2. キュレーター -Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. 
+キュレーターはサブグラフを分析し、どのサブグラフが最高品質であるかを特定します。 キュレーターが魅力的なサブグラフを見つけたら、そのボンディングカーブにシグナルを送ることでキュレーションすることができます。 そうすることで、キュレーターはインデクサーにどのサブグラフが高品質であり、インデックスを作成すべきかを知らせることができます。 -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +キュレーターはコミュニティのメンバー、データ消費者、あるいはサブグラフの開発者でもあり、GRT トークンをボンディングカーブに預けることで自分のサブグラフにシグナルを送ります。 GRT を預け入れることで、キュレーターはサブグラフのキュレーションシェアを獲得します。 その結果、キュレーターは、自分がシグナルを送ったサブグラフが生成したクエリフィーの一部を得ることができます。 ボンディングカーブは、キュレーターが最高品質のデータソースをキュレーションする動機付けとして機能します。 このセクションの「Curator」テーブルでは、以下を確認することができます: -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- キュレーターがキュレーションを開始した日付 +- デポジットされた GRT の数 +- キュレーターが所有するシェア数 -![Explorer Image 6](/img/Curation-Overview.png) +![エクスプローラーイメージ 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) +キュレーターの役割についてさらに詳しく知りたい場合は、[The Graph Academy](https://thegraph.academy/curators/) か [official documentation.](/curating)を参照してください。 -### 3. Delegators +### 3. デリゲーター -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +デリゲーターは、グラフネットワークの安全性と分散性を維持するための重要な役割を担っています。 デリゲーターは、GRT トークンを 1 人または複数のインデクサーにデリゲート(=「ステーク」)することでネットワークに参加します。 デリゲーターがいなければ、インデクサーは大きな報酬や手数料を得ることができません。 そのため、インデクサーは獲得したインデクシング報酬やクエリフィーの一部をデリゲーターに提供することで、デリゲーターの獲得を目指します。 -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! 
+一方、デリゲーターは、過去の実績、インデックス作成報酬率、クエリ手数料のカット率など、さまざまな変数に基づいてインデクサーを選択します。 また、コミュニティ内での評判も関係してきます。 選ばれたインデクサーとは、 [The Graph’s Discord](https://thegraph.com/discord) や [The Graph Forum](https://forum.thegraph.com/)でつながることをお勧めします。 -![Explorer Image 7](/img/Delegation-Overview.png) +![エクスプローラーイメージ 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +「Delegators」テーブルでは、コミュニティ内のアクティブなデリゲーターを確認できるほか、以下のような指標も確認できます: -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- デリゲーターがデリゲーションしているインデクサー数 +- デリゲーターの最初のデリゲーション内容 +- デリゲーターが蓄積したがプロトコルから引き出していない報酬 +- プロトコルから撤回済みの報酬 +- 現在プロトコルに保持している GRT 総量 +- 最後にデリゲートした日 -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). +デリゲーターになるための方法をもっと知りたい方は、ぜひご覧ください。 [official documentation](/delegating) や [The Graph Academy](https://docs.thegraph.academy/network/delegators)にアクセスしてください。 -## Network +## ネットワーク -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +「Network」セクションでは、グローバルな KPI に加えて、エポック単位に切り替えてネットワークメトリクスをより詳細に分析する機能があります。 これらの詳細を見ることで、ネットワークが時系列でどのようなパフォーマンスをしているかを知ることができます。 -### Activity +### アクティビティ -The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +アクティビティセクションには、現在のすべてのネットワークメトリクスと、時系列の累積メトリクスが表示されます。 ここでは、以下のようなことがわかります: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- 現在のネットワーク全体のステーク額 +- インデクサーとデリゲーター間のステーク配分 +- ネットワーク開始以来の総供給量、ミント量、バーン GRT +- プロトコルの開始以降のインデックス報酬総額 +- キュレーション報酬、インフレーション・レートなどのプロトコルパラメータ +- 現在のエポックの報酬と料金 -A few key details that are worth mentioning: +特筆すべき重要な詳細をいくつか挙げます: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). 
+- **クエリフィーは消費者によって生成された報酬を表し**、サブグラフへの割り当てが終了し、提供したデータが消費者によって検証された後、少なくとも 7 エポック(下記参照)の期間後にインデクサが請求することができます(または請求しないこともできます)。 +- **Iインデックス報酬は、エポック期間中にインデクサーがネットワーク発行から請求した報酬の量を表しています。**プロトコルの発行は固定されていますが、報酬はインデクサーがインデックスを作成したサブグラフへの割り当てを終了して初めてミントされます。 そのため、エポックごとの報酬数は変動します(例えば、あるエポックでは、インデクサーが何日も前から開いていた割り当てをまとめて閉じたかもしれません)。 -![Explorer Image 8](/img/Network-Stats.png) +![エクスプローラーイメージ 8](/img/Network-Stats.png) -### Epochs +### エポック -In the Epochs section you can analyse on a per-epoch basis, metrics such as: +エポックセクションでは、エポックごとに以下のようなメトリクスを分析できます: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +- エポックの開始または終了ブロック +- 特定のエポックで発生したクエリーフィーと収集されたインデクシングリワード +- エポックステータス: クエリフィーの徴収と分配に関するもので、さまざまな状態がある + - アクティブエポックとは、インデクサーが現在ステークを割り当て、クエリフィーを収集しているエポックのこと + - 決済エポックとは、状態のチャンネルを決済しているエポックのこと。 つまり、消費者がインデクサーに対して異議を唱えた場合、インデクサーはスラッシュされる可能性があるということ + - 分配エポックとは、そのエポックの状態チャンネルが確定し、インデクサーがクエリフィーのリベートを請求できるようになるエポックのこと + - 確定したエポックとは、インデクサーが請求できるクエリフィーのリベートが残っていないエポックのことで、確定している -![Explorer Image 9](/img/Epoch-Stats.png) +![エクスプローラーイメージ 9](/img/Epoch-Stats.png) -## Your User Profile +## ユーザープロファイル -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +ネットワーク統計について説明しましたが、次は個人のプロフィールについて説明します。 個人プロフィールは、ネットワークにどのように参加しているかに関わらず、自分のネットワーク活動を確認するための場所です。 あなたの Ethereum ウォレットがあなたのユーザープロフィールとして機能し、ユーザーダッシュボードで確認することができます。 -### Profile Overview +### プロフィールの概要 -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +ここでは、あなたが現在行ったアクションを確認できます。 また、自分のプロフィール情報、説明、ウェブサイト(追加した場合)もここに表示されます。 -![Explorer Image 10](/img/Profile-Overview.png) +![エクスプローラーイメージ 10](/img/Profile-Overview.png) -### Subgraphs Tab +### サブグラフタブ -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +「Subgraphs」タブをクリックすると、公開されているサブグラフが表示されます。 サブグラフは分散型ネットワークに公開されたときにのみ表示されます。 -![Explorer Image 11](/img/Subgraphs-Overview.png) +![エクスプローラーイメージ 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### インデックスタブ -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. 
+「Indexing」タブをクリックすると、サブグラフに対するすべてのアクティブな割り当てと過去の割り当てが表になっており、分析してインデクサーとしての過去のパフォーマンスを見ることができるチャートも表示されます。 -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +このセクションには、インデクサー報酬とクエリフィーの詳細も含まれます。 以下のような指標が表示されます: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- Delegated Stake - Delegator からのステークで、あなたが割り当て可能だが、スラッシュされないもの +- Total Query Fees - 提供したクエリに対してユーザーが支払った料金の合計額 +- Indexer Rewards - 受け取ったインデクサー報酬の総額(GRT) +- Fee Cut - デリゲーターとの分配時に保持するクエリフィーリベートの割合 +- Rewards Cut - デリゲーターとの分配時に保有するインデクサー報酬の割合 +- Owned - 預けているステークであり、悪質な行為や不正行為があった場合にスラッシュされる可能性がある -![Explorer Image 12](/img/Indexer-Stats.png) +![エクスプローラーイメージ 12](/img/Indexer-Stats.png) -### Delegating Tab +### デリゲーションタブ -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +デリゲーターは、グラフネットワークにとって重要な存在です。 デリゲーターは知見を駆使して、健全な報酬を提供するインデクサーを選ばなければなりません。 このタブでは、アクティブなデリゲーションの詳細と過去の履歴、そしてデリゲートしたインデクサーの各指標を確認することができます。 -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +ページの前半には、自分のデリゲーションチャートと報酬のみのチャートが表示されています。 左側には、現在のデリゲーションメトリクスを反映した KPI が表示されています。 -The Delegator metrics you’ll see here in this tab include: +このタブで見ることができるデリゲーターの指標は以下の通りです。 -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- デリゲーション報酬の合計 +- 未実現報酬の合計 +- 実現報酬の合計 -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +ページの後半には、デリゲーションテーブルがあります。 ここには、あなたがデリゲートしたインデクサーとその詳細(報酬のカットやクールダウンなど)が表示されています。 -With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. +テーブルの右側にあるボタンで、デリゲートを管理することができます。追加でデリゲートする、デリゲートを解除する、解凍期間後にデリゲートを取り消すなどの操作が可能です。 -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +表の右側にあるボタンで、デリゲーションを管理することができます。 -![Explorer Image 13](/img/Delegation-Stats.png) +![エクスプローラーイメージ 13](/img/Delegation-Stats.png) -### Curating Tab +### キュレーションタブ -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
+「Curation」タブでは、自分がシグナリングしている(その結果、クエリフィーを受け取ることができる)サブグラフを確認することができます。 シグナリングにより、キュレーターはインデクサーに対して、どのサブグラフが価値があり信頼できるかを強調することができ、その結果、そのサブグラフにインデックスを付ける必要があることを示すことができます。 -Within this tab, you’ll find an overview of: +このタブでは、以下の概要を見ることができます: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph -- Updated at date details +- キュレーションしている全てのサブグラフとシグナルの詳細 +- サブグラフごとのシェアの合計 +- サブグラフごとのクエリ報酬 +- 更新日の詳細 -![Explorer Image 14](/img/Curation-Stats.png) +![エクスプローラーイメージ 14](/img/Curation-Stats.png) -## Your Profile Settings +## プロフィールの設定 -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +ユーザープロフィールでは、個人的なプロフィールの詳細(ENS ネームの設定など)を管理することができます。 インデクサーの方は、さらに多くの設定が可能です。 ユーザープロファイルでは、デリゲーションパラメーターとオペレーターを設定することができます。 -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- オペレーターは、インデクサーに代わって、割り当ての開始や終了など、プロトコル上の限定的なアクションを行います。 オペレーターは通常、ステーキングウォレットとは別の他の Ethereum アドレスで、インデクサーが個人的に設定できるネットワークへのゲート付きアクセス権を持っています。 +- 「Delegation parameters」では、自分とデリゲーターの間で GRT の分配をコントロールすることができます。 -![Explorer Image 15](/img/Profile-Settings.png) +![エクスプローラーイメージ 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +グラフエクスプローラーは、分散型データの世界への公式ポータルとして、ネットワーク内でのあなたの役割に関わらず、様々なアクションを取ることができます。 アドレスの横にあるドロップダウンメニューを開き、「Settings」ボタンをクリックすると、自分のプロフィール設定ができます。
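The indexer table walked through earlier in this translation lists both an available and a maximum delegation capacity, and notes that stake delegated beyond the maximum cannot be used for allocations or reward calculations. The TypeScript sketch below shows one way those three numbers fit together; the 16x ratio between self-stake and maximum capacity is an assumption used only for illustration, since the document itself does not state how the maximum is derived.

```typescript
// Sketch of the delegation-capacity metrics shown in the indexer table.
// DELEGATION_RATIO is a protocol-level parameter; 16 is assumed here purely
// for illustration and is not taken from this document.
const DELEGATION_RATIO = 16;

interface IndexerStake {
  selfStake: number; // "Owned" stake deposited by the indexer, in GRT
  delegated: number; // "Delegated" stake received from delegators, in GRT
}

function maxDelegationCapacity(i: IndexerStake): number {
  return i.selfStake * DELEGATION_RATIO;
}

function availableDelegationCapacity(i: IndexerStake): number {
  return Math.max(0, maxDelegationCapacity(i) - i.delegated);
}

// Delegation beyond the maximum is "overdelegated" and is not counted
// toward allocations or reward calculations.
function effectiveDelegation(i: IndexerStake): number {
  return Math.min(i.delegated, maxDelegationCapacity(i));
}

const indexer: IndexerStake = { selfStake: 100_000, delegated: 1_700_000 };
console.log(maxDelegationCapacity(indexer));       // 1600000
console.log(availableDelegationCapacity(indexer)); // 0 (overdelegated)
console.log(effectiveDelegation(indexer));         // 1600000
```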
![Wallet details](/img/Wallet-Details.png)
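Each of these translations points out that published subgraphs can be queried with standard GraphQL, either in the Explorer playground or from an application. As a rough sketch of what that looks like from code, the TypeScript snippet below POSTs a query to a subgraph endpoint; the URL, the organisation/subgraph name, and the `tokens` entity are placeholders rather than a real deployment, and a runtime with built-in `fetch` (Node 18+ or a browser) is assumed.

```typescript
// Minimal sketch of querying a published subgraph over its GraphQL endpoint.
// The endpoint path and the "tokens" entity are hypothetical placeholders.
const ENDPOINT =
  "https://api.thegraph.com/subgraphs/name/example-org/example-subgraph";

const query = `
  {
    tokens(first: 5, orderBy: id) {
      id
      owner
    }
  }
`;

async function main(): Promise<void> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  if (errors) {
    console.error("GraphQL errors:", errors);
    return;
  }
  console.log(data.tokens);
}

main().catch(console.error);
```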
From 2627f40874bac2f41c64adb22fd2cdd1261c0127 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:55 -0500 Subject: [PATCH 239/241] New translations explorer.mdx (Korean) --- pages/ko/explorer.mdx | 212 +++++++++++++++++++++--------------------- 1 file changed, 106 insertions(+), 106 deletions(-) diff --git a/pages/ko/explorer.mdx b/pages/ko/explorer.mdx index c8df28cfe03f..816139ae9a58 100644 --- a/pages/ko/explorer.mdx +++ b/pages/ko/explorer.mdx @@ -1,211 +1,211 @@ --- -title: The Graph Explorer +title: 탐색기 --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +그래프 탐색기, 혹은 우리가 흔히 부르는 것 처럼, 서브그래프와 네트워크 데이터의 세계로 향하는 탈중앙화 포탈에 오신것을 환영합니다! 그래프 탐색기는 다른 서브그래프 개발자, dapp 개발자, 큐레이터, 인덱서 및 위임자와 상호 작용할 수 있는 다양한 부분들로 구성됩니다. 그래프 탐색기에 대한 일반적인 개요를 알아보기 위해 아래의 비디오를 확인하세요.
-## Subgraphs +## 서브그래프 -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +먼저, 여러분들이 막 여러분의 서브그래프 스튜디오에서 서브그래프를 배포 및 게시한 경우, 네비게이션 바 상단에 있는 서브그래프 탭은 분산형 네트워크에서 여러분들 소유의 완료된 서브그래프(및 다른 사람의 서브그래프)를 볼 수 있는 장소입니다. 여기에서 여러분들은 생성된 날짜, 신호 양 또는 이름을 기준으로 찾고 있는 정확한 서브그래프를 찾을 수 있습니다. ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +여러분들이 서브그래프를 클릭하면, 플레이그라운드에서 쿼리를 테스트하고 네트워크 세부 정보를 활용하여 정보에 입각한 결정을 내릴 수 있습니다. 또한 여러분들은 자신의 서브그래프 또는 다른 사람의 서브그래프에 GRT 신호를 보내어, 인덱서가 그 중요성과 품질을 인식하도록 할 수도 있습니다. 이것은 서브그래프의 신호가 인덱싱되도록 인센티브를 부여하기 때문에 매우 중요합니다. 이는 결국 쿼리를 제공하기 위해 네트워크에 표시된다는 것을 의미합니다. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +각 서브그래프의 전용 페이지에는 몇 가지 세부 정보가 표시됩니다. 이러한 사항들이 포함되어 있습니다: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- 서브그래프 상의 시그널/언시그널 +- 차트, 현재 배포 ID 및 다른 메타데이터와 같은 더욱 자세한 정보 보기 +- 서브그래프의 과거 반복 과정을 탐색하기 위한 버전 전환 +- GraphQL을 통한 서브그래프 쿼리 +- 플레이그라운드에서의 서브그래프 테스트 +- 특정 서브그래프에 인덱싱하는 인덱서 보기 +- 서브그래프 상태 (할당, 큐레이터, 기타사항) +- 서브그래프를 게시한 엔티티 보기 ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## 참여자 -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +이 탭에서는 인덱서, 위임자 및 큐레이터와 같이 네트워크 활동에 참여하는 모든 주체들을 조감도로 볼 수 있습니다. 아래에서, 저희는 여러분들을 위해 각 탭이 의미하는 바가 무엇인지 자세히 살펴보겠습니다. -### 1. Indexers +### 1. 인덱서 ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +인덱서부터 시작해보도록 하겠습니다. 인덱서는 프로토콜의 백본으로, 이들은 서브그래프에 스테이킹 및 인덱싱을 수행하고, 서브그래프를 사용하는 모든 사람에게 쿼리를 제공합니다. 인덱서 테이블에서 여러분들은 인덱서의 위임 매개변수, 그들의 스테이킹, 각 서브그래프에 대한 스테이킹, 쿼리 수수료 및 인덱싱 보상으로 얻은 수익을 볼 수 있습니다. 좀 더 심청적인 내용은 아래와 같습니다: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. 
If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- Query Fee Cut - 위임자들과 쿼리 피를 나눌 때, 인덱서가 가져가는 쿼리 수수료의 리베이트 비율 +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. 이 항목이 양수이면, 이는 그 인덱서가 그들의 보상의 일부분을 수취함을 있음을 의미합니다. If it’s positive, it means that the Indexer is keeping some of their rewards +- Cooldown Remaining - 인덱서가 위의 위임 매개변수를 변경할 수 있을 때까지 남은 시간입니다. Cooldown 기간은 인덱서가 그들의 위임 매개변수들을 업데이트 할 때 인덱서에 의해 설정됩니다. +- Owned - 이것은 인덱서의 예치된 스테이킹 내역이며, 악의적이거나 잘못된 행동으로 인해 슬래싱 패널티를 받을 수 있습니다. +- Delegated - 인덱서에 의해 할당될 수는 있지만, 슬래싱 패널티는 받을 수 없는 위임자들의 스테이킹 지분입니다. +- Allocated - 인덱서들이 그들이 인덱싱하는 서브그래프에 적극적으로 할당하는 스테이킹 지분입니다. +- Available Delegation Capacity - 인덱서가 위임 수용력 이상으로 과도하게 위임받기 전, 인덱서들이 여전히 받을 수 위임 스테이킹 수량입니다. +- Max Delegation Capacity - 인덱서가 생산적으로 수용할 수 있는 지분 위임 최대 수량입니다. 이를 초과하여 위임받은 지분들의 경우, 할당 혹은 보상 계산에 사용될 수 업습니다. +- Query Fees - 이는 최종 사용자들이 모든 시간 동안 인덱서들의 쿼리들에 대하여 지불해야하는 총 수수료입니다. +- Indexer Rewards - 이는 모든 시간 동안 인덱서 및 그들의 위임자들이 창출하는 총 인덱서 보상입니다. 인덱서 보상은 GRT 발행을 통해 지급됩니다. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +인덱서들은 쿼리 수수료와 인덱싱 보상을 모두 얻을 수 있습니다. 기능적으로, 이는 네트워크 참가자가 GRT를 인덱서에 위임할 때 발생합니다. 이를 통해 인덱서는 인덱서 매개변수에 따라 쿼리 수수료와 보상을 받을 수 있습니다. 인덱싱 매개변수는 테이블의 오른쪽을 클릭하거나 인덱서의 프로필로 이동하여 "Delegate" 버튼을 클릭하여 설정합니다. -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +인덱서가 되는 방법에 대해 더 자세히 알아보고 싶으신 분들은, [official documentation](/indexing) 혹은 [The Graph Academy Indexer guides](https://thegraph.academy/delegators/choosing-indexers/)를 확인해보시길 바랍니다. ![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. Curators +### 2. 큐레이터 -Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +큐레이터는 서브그래프들을 분석하여 어떤 서브그래프가 최고 품질의 서브그래프인지를 식별합니다. 
일단 큐레이터가 잠재적으로 매력적인 서브그래프를 발견하면, 그들은 본딩 커브에 신호를 보내서 그것을 큐레이션 할 수 있습니다. 이를 통해 큐레이터는 인덱서에게 어떤 서브래프가 고품질이고, 인덱싱 되어야 하는지를 알려줍니다. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +큐레이터는 커뮤니티 구성원, 데이터 소비자, 혹은 심지어 GRT 토큰을 본딩 커브에 넣음으로써 자신의 서브그래프에 신호를 보내는 서브그래프 개발자가 될 수 있습니다. GRT를 예치함으로써 큐레이터는 서브그래프의 큐레이션 쉐어를 발행합니다. 결과적으로 큐레이터는 그들이 신호한 서브그래프가 생성하는 쿼리 수수료의 일부를 얻을 수 있습니다. 본딩 커브는 큐레이터가 최고 품질의 데이터 소스를 큐레이션하도록 동기부여를 합니다. 이 섹션의 큐레이터 테이블에서 다음 사항들을 확인할 수 있습니다. -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- 큐레이터가 큐레이팅을 시작한 날 +- 예치된 GRT의 수 +- 큐레이터가 소유한 쉐어 수 ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) +만약, 여러분들이 큐레이터의 역할에 대해 더 알고 싶으시다면, [The Graph Academy](https://thegraph.academy/curators/) 혹은 [official documentation](/curating) 링크를 클릭하셔서 더욱 자세히 살펴보시기 바랍니다. -### 3. Delegators +### 3. 위임자 -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +위임자는 더그래프 네트워크의 보안 및 분산화 유지에 중요한 역할을 수행합니다. 이들은 하나 이상의 인덱서에 GRT 토큰을 위임(즉, "스테이킹")하여 네트워크에 참여합니다. 위임자 없이는, 인덱서가 많은 양의 보상과 수수료를 받을 가능성이 줄어듭니다. 따라서 인덱서들은 인덱싱 보상 및 쿼리 수수료의 일부를 위임자들에게 제공하는 정책을 통해 위임자들을 유치합니다. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! +반면에, 위임자들은 과거 성과, 인덱싱 보상률, query fee cuts 등 다양한 변수들을 기준으로 인덱서를 선택합니다. 커뮤니티 내에서의 명성 또한 이에 한 요소로 작용할 수 있습니다. [더그래프 디스코드](https://thegraph.com/discord) 혹은 [더그래프 포럼](https://forum.thegraph.com/)을 통해 인덱서들과 소통하시길 추천드립니다! ![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +위임자 테이블에서는 커뮤니티 내의 활성 위임자들 및 다음과 같은 메트릭스를 볼 수 있습니다. -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- 어떠한 위임자가 위임을 시행하고 있는 인덱서들의 수 +- 어떠한 위임자의 본 위임 +- 그들이 축적하였지만, 프로토콜로부터 인출하지 않은 보상들 +- 그들이 프로토콜로부터 인출하여 실현된 보상들 +- 그들이 현재 프로토콜 상에 보유하고 있는 GRT의 총 수량 +- 그들이 마지막으로 위임 행위를 한 날짜 -If you want to learn more about how to become a Delegator, look no further! 
All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). +위임자가 되는 방법에 대해 더 알고 싶으시다면, 더 둘러보실 필요 없습니다! 여러분들이 지금 하셔야 할 일은 [official documentation](/delegating) 혹은 [The Graph Academy](https://docs.thegraph.academy/network/delegators)에 방문 하는 것입니다! -## Network +## 네트워크 -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +네트워크 섹션에서 여러분들은 에폭을 기준으로 전환하는 전환하는 능력 뿐만 아니라, 글로벌 KPI 및 네트워크 메트릭을 보다 자세히 분석할 수 있는 기능을 보실 수 있습니다. 이러한 세부 정보를 통해 시간이 지남에 따라 네트워크가 어떻게 작동하는지 알 수 있습니다. -### Activity +### 활동 -The activity section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +활동 섹션에는 모든 현재 네트워크 메트릭스와 시간에 따른 일부 누적 메트릭이 있습니다. 여기서 여러분들은 다음과 같은 사항들을 볼 수 있습니다. -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- 현재 네트워크 스테이킹 총량 +- 인덱서와 그들의 위임자 사이의 스테이킹 분할 내역 +- 네트워크 시작 이후 GRT의 총 공급량, 발행량 및 소각량 +- 프로토콜 시작 이후 총 인덱싱 보상들 +- 보상, 인플레이션 비율 등과 같은 프로토콜 파라미터 +- 현재 에폭 보상 및 수수료들 -A few key details that are worth mentioning: +언급할만한 가치가 있는 몇 가지 주요 세부정보 : -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **쿼리 수수료는 소비자들에 의해 생성된 수수료들을 나타냅니다.** 그리고 이들은 해당 서브그래프에 대한 인덱서들의 할당이 종료되고 소비자가 제공한 데이터들이 검증된 다음, 최소 7 에폭의 기간이 지난 이후에 인덱서들에 의해 클레임(혹은 클레임 불가)될 수 있습니다. +- **인덱싱 보상은 해당 에폭 동안 네트워크 발행으로부터 인덱서가 청구한 보상 금액을 나타냅니다.** 프로토콜 발행은 고정되어 있더라도, 해당 보상은 인덱서가 인덱싱 중인 서브그래프에 대한 할당을 닫아야지만 발행됩니다. 따라서, 에폭 마다 보상 횟수는 다양합니다(예: 일부 에폭 동안에, 인덱서는 며칠 동안 열려 있던 할당을 일괄적으로 닫았을 수 있습니다). ![Explorer Image 8](/img/Network-Stats.png) -### Epochs +### 에폭(Epochs) In the Epochs section you can analyse on a per-epoch basis, metrics such as: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. +- 에폭 시작 혹은 종료 블록 +- 특정 에포크 동안 생성된 쿼리 수수료 및 인덱싱 보상 +- 에폭 상태(Epoch status)는 다음과 같은 다양한 상태를 가질 수 있는 쿼리 수수료 수집 및 분배를 나타냅니다. 
+ - 활성 에폭(The active epoch)은 현재 인덱서가 지분을 할당 및 쿼리 수수료 수집을 진행하고 있는 에폭입니다. + - 결산 에폭(The settling epochs)은 상태 채널이 결산되고 있는 에폭입니다. 이는 소비자가 인덱서를 상대로 분쟁을 제기하는 경우, 해당 인덱서는 슬래싱 패널티를 받을 수 있음을 의미합니다. + - 분배 에폭(The distributing epochs)은 해당 에폭들에 대한 상태 채널이 정산되고 인덱서가 쿼리 수수료 리베이트를 청구할 수 있는 에폭들입니다. - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. ![Explorer Image 9](/img/Epoch-Stats.png) -## Your User Profile +## 여러분들의 사용자 프로필 -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +저희는 네트워크 통계에 대해 이야기했으므로, 이제 개인 프로필로 넘어가 보도록 하겠습니다. 여러분들이 네트워크에 참여하는 방식에 관계없이, 여러분들의 개인 프로필은 여러분들의 네트워크 활동을 볼 수 있는 영역입니다. 여러분들의 이더리움 지갑이 사용자 프로필 역할을 하며, 여러분들은 사용자 대시보드를 통해 다음 사항들을 확인 가능합니다 : -### Profile Overview +### 프로필 개요 -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +이곳에서 여러분들은 이전에 수행한 현황을 확인할 수 있습니다. 이곳에서 여러분들은 프로필 정보, 설명 및 웹사이트(추가한 경우) 또한 찾으실 수 있습니다. ![Explorer Image 10](/img/Profile-Overview.png) -### Subgraphs Tab +### 서브그래프 탭 -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +서브그래프 탭을 클릭하면 배포된 서브그래프들이 표시됩니다. 여기에는 테스트 목적으로 CLI와 함께 배포된 서브그래프는 포함되지 않습니다. - 서브그래프는 탈중앙화 네트워크에 배포될 때만 표시됩니다. ![Explorer Image 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### 인덱싱 탭 -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +만약 여러분들이 인덱싱 탭을 클릭하면, 서브그래프에 대한 모든 활성 및 과거 할당 내역들을 볼 수 있는 테이블이 존재하며, 인덱서로서 여러분들의 과거 성과를 분석하고 볼 수 있는 차트 또한 찾을 수 있습니다. -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +이 섹션에는 순 인덱서 보상 및 순 쿼리 수수료에 대한 세부 정보도 포함됩니다. 여러분들은 다음의 메트릭스들을 확인 가능합니다. -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- Delegated Stake - 여러분들에 의해 할당될 수는 있지만, 슬래싱 패널티는 받지 않는 위임자의 지분 +- Total Query Fees - 시간이 지남에 따라 여러분이 제공한 쿼리에 대해 사용자가 지불한 총 수수료 +- Indexer Rewards - 여러분들이 GRT 로 받은 인덱서 보상의 총 수량 +- Fee Cut - 여러분들이 쿼리 수수료를 위임자들과 나눌 때, 여러분들이 취하는 쿼리 수수료의 비율(%) +- Rewards Cut - 여러분들이 인덱서 수수료를 위임자들과 나눌 때, 여러분들이 취하는 인덱서 보상의 비율(%) +- Owned - 악의적인 행동이나 잘못된 행동으로 인해 삭감 패널티를 받을 수 있는 여러분들이 예치한 스테이킹 수량 ![Explorer Image 12](/img/Indexer-Stats.png) -### Delegating Tab +### 위임 탭 -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. 
Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +위임자들은 더그래프 네트워크에 매우 중요합니다. 위임자는 자신의 지식을 사용하여 정상적인 보상 수익을 제공할 인덱서를 선택해야 합니다. 여기서 여러분들은 활성 및 과거 위임의 세부 정보를 찾을 수 있으며, 동시에 위임한 인덱서의 메트릭스를 확인할 수 있습니다. -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +페이지의 처음 절반은 위임 차트와 보상 전용 차트가 표시됩니다. 왼쪽에는 여러분의 현재 위임 메트릭스를 반영하는 KPI가 표시됩니다. -The Delegator metrics you’ll see here in this tab include: +이 탭에서 볼 수 있는 위임자 메트릭스는 다음과 같습니다 : -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- 총 위임 보상 +- 총 미실현 보상 +- 총 실현 보상 -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +페이지 후반에는 여러분의 위임 표가 존재합니다. 여기서 여러분들은 위임한 인덱서와 해당 세부 정보(예: rewards cuts, 재사용 대기 시간 등)를 볼 수 있습니다. With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +표의 오른쪽에 있는 버튼을 사용하여 여러분들의 위임을 관리할 수 있습니다(추가 위임, 위임 취소 혹은 해빙 기간 이후 위임에 대한 출금). ![Explorer Image 13](/img/Delegation-Stats.png) -### Curating Tab +### 큐레이팅 탭 -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +큐레이션 탭에서 여러분들은 여러분들이 신호를 보내고 있는 모든 서브그래프들을 찾을 수 있습니다.(이로인해 여러분들은 쿼리 수수료를 받을 수 있습니다.) 시그널링을 통해 큐레이터는 인덱서들에게 어떤 서브그래프가 가치 있고 신뢰할 수 있는지를 강조할 수 있으므로, 이들이 인덱싱 되어야 한다는 신호를 보낼 수 있게됩니다. -Within this tab, you’ll find an overview of: +이 탭 내에서 여러분들은 다음 사항들의 개요를 확인할 수 있습니다 : -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph -- Updated at date details +- 신호 명세사항들과 함께 여러분이 신호를 보내고 있는 모든 서브그래프들 +- 서브그래프 별 총 쉐어 +- 서브그래프 별 쿼리 보상들 +- 날짜 세부정보에 대한 업데이트 내역 ![Explorer Image 14](/img/Curation-Stats.png) -## Your Profile Settings +## 프로필 설정 -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +사용자 프로필 내에서, 개인 프로필 세부 정보(예: ENS 네임 설정)를 관리할 수 있습니다. 만약 여러분들이 인덱서라면, 간편하게 설정에 접근할 수 있습니다. 여러분들의 유저 프로필 내에서, 여러분들은 여러분들의 위임 매개변수 및 운영자 설정을 할 수 있습니다. -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- 운영자는 프로토콜에서 인덱서를 대신하여 할당 열기 및 닫기와 같은 제한된 작업을 수행합니다. 운영자는 일반적으로 인덱서가 개인적으로 설정할 수 있는, 네트워크에 대한 게이트 액세스가 되어있는, 스테이킹 지갑과는 별도의 다른 이더리움 주소입니다. +- 위임 매개변수를 사용하면 여러분들과 여러분들의 위임자 간의 GRT 분배를 제어할 수 있습니다. 
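The settings just described let an indexer choose a query fee cut and a reward cut, and the indexer table earlier in this patch shows the same two parameters from the delegator's side. The sketch below works through the arithmetic of that split with invented percentages and amounts; it deliberately ignores rebate mechanics, cooldowns, and the fact that the delegators' portion is then divided pro rata across the delegation pool.

```typescript
// Illustrative split of indexing rewards and query fees between an indexer
// and its delegation pool, based on the "cut" parameters shown in Explorer.
// All numbers below are made up for the example.
interface DelegationParams {
  queryFeeCut: number; // fraction of query fees the indexer keeps, e.g. 0.10 = 10%
  rewardCut: number;   // fraction of indexing rewards the indexer keeps
}

function split(total: number, cut: number): { indexer: number; delegators: number } {
  const indexer = total * cut;
  return { indexer, delegators: total - indexer };
}

const params: DelegationParams = { queryFeeCut: 0.1, rewardCut: 0.15 };

const queryFees = 1_000;       // GRT collected from queries in some period
const indexingRewards = 4_000; // GRT of indexing rewards in the same period

console.log(split(queryFees, params.queryFeeCut));     // { indexer: 100, delegators: 900 }
console.log(split(indexingRewards, params.rewardCut)); // { indexer: 600, delegators: 3400 }
```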
![Explorer Image 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +탈중앙화된 데이터의 세계로 향하는 공식 포털인 더그래프 탐색기를 사용하면, 네트워크에서의 역할에 상관없이 다양한 행위가 가능합니다. 여러분들의 주소 옆에 있는 드롭다운 메뉴를 연 다음, 설정 버튼을 클릭하면 프로필 설정으로 이동할 수 있습니다. -
![Wallet details](/img/Wallet-Details.png)
+
Wallet details
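The Korean translation above, like the Japanese one before it, steps through the same four epoch states (active, settling, distributing, finalized) and notes that query fees only become claimable some time after an allocation is closed. The sketch below encodes that lifecycle as a small TypeScript type; the descriptions are paraphrased from the text, and the seven-epoch delay mirrors the "at least 7 epochs" wording in these pages rather than anything verified against the protocol contracts.

```typescript
// Simplified model of the epoch states described in the Explorer docs.
type EpochStatus = "active" | "settling" | "distributing" | "finalized";

// Rough interpretation of each state, paraphrased from the text above:
function describe(status: EpochStatus): string {
  switch (status) {
    case "active":
      return "Indexers are allocating stake and collecting query fees.";
    case "settling":
      return "State channels are settling; indexers can still be disputed and slashed.";
    case "distributing":
      return "State channels are settled; indexers can claim query fee rebates.";
    case "finalized":
      return "No query fee rebates are left to claim; the epoch is closed.";
  }
}

// The pages above say fees become claimable after a period of at least 7 epochs.
const CLAIM_DELAY_EPOCHS = 7;

function rebatesClaimable(currentEpoch: number, allocationClosedEpoch: number): boolean {
  return currentEpoch - allocationClosedEpoch >= CLAIM_DELAY_EPOCHS;
}

console.log(describe("distributing"));
console.log(rebatesClaimable(110, 102)); // true: 8 epochs have passed
```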
From 18130220b912daae8d5ae7fe697d6888a75ec985 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Rouleau?= Date: Thu, 27 Jan 2022 20:11:56 -0500 Subject: [PATCH 240/241] New translations explorer.mdx (Chinese Simplified) --- pages/zh/explorer.mdx | 216 +++++++++++++++++++++--------------------- 1 file changed, 108 insertions(+), 108 deletions(-) diff --git a/pages/zh/explorer.mdx b/pages/zh/explorer.mdx index c8df28cfe03f..85698d600f9e 100644 --- a/pages/zh/explorer.mdx +++ b/pages/zh/explorer.mdx @@ -1,8 +1,8 @@ --- -title: The Graph Explorer +title: 浏览器 --- -Welcome to the Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 The Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of the Graph Explorer, check out the video below (or keep reading below): +欢迎使用 Graph 浏览器,或者我们可以称它为您进入子图和网络数据世界的去中心化门户。 The Graph 浏览器由多个部分组成,您可以在其中与其他子图开发人员、去中心化应用开发人员、策展人、索引人和 委托人进行交互。 有关 Graph 浏览器的通用概述,请查看下面的视频(或继续阅读下面的内容):
-## Subgraphs +## 子图 -First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on date created, signal amount, or name. +首先,如果您刚刚在 子图工作室中完成部署和发布您的子图,导航栏顶部的 子图选项卡是您在去中心化网络上查看您自己完成的子图(以及其他人的子图)的地方。 在这里,您将能够根据创建日期、信号量或名称找到您正在寻找的确切子图。 ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +当您单击子图时,您将能够在面板上测试查询,并能够利用网络详细信息做出明智的决策。 您还可以在您自己的子图或其他人的子图中发出 GRT 信号,以使索引人意识到其重要性和质量。 这很关键,因为子图上的信号会激励它被索引,这意味着它将出现在网络上,最终为查询提供服务。 ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +在每个子图的专用页面上,会显示一些详细信息。 这些包括: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- 子图上的信号/非信号 +- 查看更多详细信息,例如图表、当前部署 ID 和其他元数据 +- 切换版本以探索子图的过去迭代版本 +- 通过 GraphQL 查询子图 +- 在面板上测试子图 +- 查看在某个子图上建立索引的索引人 +- 子图统计信息(分配、策展人等) +- 查看发布子图的实体 ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) -## Participants +## 参与者 -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in depth review of what each tab means for you. +在此选项卡中,您可以鸟瞰所有参与网络活动的人员,例如索引人、委托人和策展人。 下面,我们将深入了解每个选项卡对您的意义。 -### 1. Indexers +### 1. 索引人 ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +让我们从索引人开始。 索引人是协议的骨干,是那些质押于子图、索引它们并向使用子图的任何人提供查询服务的人。 在 索引人表中,您将能够看到 索引人的委托参数、他们的权益、他们对每个子图的权益以及他们从查询费用和索引奖励中获得的收入。 细则如下: -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. 
Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become overdelegated -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- 查询费用削减 - 索引人与委托人拆分时保留的查询费用回扣的百分比 +- 有效的奖励削减 - 应用于委托池的索引奖励削减。 如果它是负数,则意味着索引人正在赠送部分奖励。 如果是正数,则意味着 索引人保留了他们的一些奖励 +- 冷却时间剩余 - 索引人可以更改上述委托参数之前的剩余时间。 冷却时间由索引人在更新其委托参数时设置 +- 已拥有 - 这是索引人的存入股份,可能会因恶意或不正确的行为而被削减 +- 已委托 - 委托人的股权可以由索引人分配,但不能被削减 +- 已分配 - 索引人积极分配给他们正在索引的子图的股权 +- 可用委托容量 - 索引人在过度委托之前仍然可以收到的委托权益数量 +- 最大委托容量 - 索引人可以有效接受的最大委托权益数量。 超出的委托权益不能用于分配或奖励计算。 +- 查询费用 - 这是最终用户一直以来为来自索引人的查询支付的总费用 +- 索引人奖励 - 这是索引人及其委托人在所有时间获得的总索引人奖励。 索引人奖励通过 GRT 发行支付。 -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking into the right hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +索引人可以获得查询费用和索引奖励。 从功能上讲,当网络参与者将 GRT 委托给索引人时,就会发生这种情况。 这使索引人能够根据其索引人参数接收查询费用和奖励。 索引参数可以通过点击表格的右侧来设置,或者通过进入索引人的配置文件并点击“委托”按钮来设置。 -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +如果您想了解有关 Curator 角色的更多信息,可以通过访问 [The Graph Academy](https://thegraph.academy/curators/)或者 [官方文档](/curating)来实现。 ![Indexing details pane](/img/Indexing-Details-Pane.png) -### 2. Curators +### 2. 策展人 -Curators analyze subgraphs to identify which subgraphs are of highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +策展人分析子图以确定哪些子图质量最高。 一旦策展人发现了一个潜在有吸引力的子图,他们就可以通过在其粘合曲线上发出信号来策展它。 在这样做时,策展人让索引人知道哪些子图是高质量的并且应该被索引。 -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. 
The Curator table in this section will allow you to see: +策展人可以是社区成员、数据消费者,甚至是子图开发者,他们通过将 GRT 代币存入粘合曲线来在自己的子图上发出信号。 通过存入 GRT,策展人铸造了子图的策展份额。 因此,策展人有资格获得他们发出信号的子图生成的一部分查询费用。 粘合曲线激励策展人策展最高质量的数据源。 本节中的 策展人表将允许您查看: -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- 策展人开始策展的日期 +- 已存入的 GRT 数量 +- 策展人拥有的股份数量 ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/curating) +如果你想了解更多关于策展人角色的信息,你可以通过访问 [The Graph Academy](https://thegraph.academy/curators/) 的以下链接或[官方文档](/curating)来实现。 -### 3. Delegators +### 3. 委托人 -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +委托人在维护 The Graph 网络的安全性和去中心化方面发挥着关键作用。 他们通过将 GRT 代币委托给一个或多个索引人(即“质押”)来参与网络。 如果没有委托人,索引人不太可能获得可观的奖励和费用。 因此,索引人试图通过向委托人提供他们获得的一部分索引奖励和查询费用来吸引委托人。 -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://thegraph.com/discord) or [The Graph Forum](https://forum.thegraph.com/)! +委托人反过来根据许多不同的变量选择索引人,例如过去的表现、索引奖励率和查询费用削减。 社区内的声誉也可以起到一定的作用! 建议连接通过[The Graph’s Discord](https://thegraph.com/discord) 或者 [The Graph 论坛](https://forum.thegraph.com/)选择索引人! ![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +委托人表将允许您查看社区中的活跃委托人,以及以下指标: -- The number of Indexers a Delegator is delegating towards -- A Delegator’s original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated at +- 委托人委托给的索引人数量 +- 委托人的原始委托 +- 他们已经积累但没有退出协议的奖励 +- 他们从协议中撤回的已实现奖励 +- 他们目前在协议中的 GRT 总量 +- 他们上次授权的日期 -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/delegating) or [The Graph Academy](https://docs.thegraph.academy/network/delegators). +如果您想了解更多有关如何成为委托人的信息,请不要再犹豫了! 您所要做的就是前往 [官方文档](/delegating) 或者 [The Graph Academy](https://docs.thegraph.academy/network/delegators). -## Network +## 网络 -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +在网络部分,您将看到全局 KPI 以及切换到每个时期的基础和更详细地分析网络指标的能力。 这些详细信息将让您了解网络随时间推移的表现。 -### Activity +### 活动 -The activity section has all the current network metrics as well as some cumulative metrics over time. 
Here you can see things like: +活动部分包含所有当前网络指标以及一些随时间累积的指标。 在这里,您可以看到以下内容: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- 当前网络总质押量 +- 索引人和他们的委托人之间的股份分配 +- 自网络成立以来的总供应量、铸造和燃烧的 GRT +- 自协议成立以来的总索引奖励 +- 协议参数,例如管理奖励、通货膨胀率等 +- 当前时期奖励和费用 -A few key details that are worth mentioning: +一些值得一提的关键细节: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **查询费用代表消费者产生的费用,**在他们对子图的分配已经关闭并且他们提供的数据已经被关闭后,在至少 7 个周期(见下文)之后,索引人可以要求(或不要求)它们得到消费者的认可。 +- **索引奖励代表索引人在该时期从网络发行中索取的奖励数量。 ** 尽管协议发布是固定的,但只有当索引人关闭对他们一直在索引的子图的分配时才会产生奖励。 因此,每个时期的奖励数量是不同的(即,在某些时期,索引人可能会集体关闭已开放多天的分配)。 ![Explorer Image 8](/img/Network-Stats.png) -### Epochs +### 时期 -In the Epochs section you can analyse on a per-epoch basis, metrics such as: +在 时期部分,您可以在每个 时期的基础上分析指标,例如: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +- 时期开始或结束块 +- 在特定时期产生的查询费用和索引奖励 +- 时期状态,指的是查询费用的收取和分配,可以有不同的状态: + - 活跃时期是索引人目前正在分配权益并收取查询费用的时期 + - 稳定时期是状态通道正在稳定的时期。 这意味着如果消费者对他们提出争议,索引人将受到严厉惩罚。 + - 分发 时期是 时期的状态通道正在结算的 时期,索引人可以要求他们的查询费用回扣。 + - 最终确定的时期是索引人没有留下查询费回扣的时期,因此被最终确定。 ![Explorer Image 9](/img/Epoch-Stats.png) -## Your User Profile +## 您的用户资料 -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your Ethereum wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +既然我们已经讨论了网络统计信息,让我们继续讨论您的个人资料。 无论您以何种方式参与网络,您的个人资料都是您查看网络活动的地方。 您的以太坊钱包将作为您的用户资料,通过用户仪表板,您将能够看到: -### Profile Overview +### 个人资料概览 -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). 
+您可以在此处查看您当前采取的任何操作。 您也可以在这里找到您的个人资料信息、描述和网站(如果您添加了)。 ![Explorer Image 10](/img/Profile-Overview.png) -### Subgraphs Tab +### 子图标签 -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +如果单击子图选项卡,您将看到已发布的子图。 这将不包括为测试目的使用 CLI 部署的任何子图——子图只会在它们发布到去中心化网络时显示。 ![Explorer Image 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### 索引标签 -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +如果您单击“索引”选项卡,您将找到一个表格,其中包含对子图的所有活动和历史分配,以及您可以分析和查看过去作为索引人的表现的图表。 -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +本节还将包括有关您的净索引人奖励和净查询费用的详细信息。 您将看到以下指标: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- 已委托股份 - 委托人的股份,您可以分配但不能被削减 +- 总查询费用 - 用户在一段时间内为您提供的查询支付的总费用 +- 索引人奖励- 您收到的 索引人奖励总额,以 GRT 为单位 +- 费用削减 - 当您与委托人拆分时,您将保留的查询费用回扣百分比 +- 奖励削减 - 与委托人拆分时您将保留的索引人奖励的百分比 +- 已拥有 - 您存入的股份,可能会因恶意或不正确的行为而被削减 ![Explorer Image 12](/img/Indexer-Stats.png) -### Delegating Tab +### 委托标签 -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +委托人对 The Graph 网络很重要。 委托人必须利用他们的知识来选择能够提供健康回报的索引人。 在这里,您可以找到您的活动和历史委托的详细信息,以及您委托给的索引人的指标。 -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +在页面的前半部分,您可以看到您的委托图表,以及仅奖励图表。 在左侧,您可以看到反映您当前委托指标的 KPI。 -The Delegator metrics you’ll see here in this tab include: +您将在此选项卡中看到的委托人指标包括: -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- 总委托奖励 +- 未实现的总奖励 +- 已实现的总奖励 -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +在页面的后半部分,您将看到委托标签。 在这里,您可以看到您委托给的索引人,以及它们的详细信息(例如奖励削减、冷却时间等)。 -With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. +通过表格右侧的按钮,你可以管理你的委托--更多的委托,取消委托,或在解冻期后撤回你的委托。 -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +使用表格右侧的按钮,您可以管理您的委托——在解冻期后增加委托、取消委托或撤回委托。 ![Explorer Image 13](/img/Delegation-Stats.png) -### Curating Tab +### 策展标签 -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). 
Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +在 策展选项卡中,您将找到您正在发送信号的所有子图(从而使您能够接收查询费用)。 信号允许策展人向索引人突出显示哪些子图有价值和值得信赖,从而表明它们需要被索引。 -Within this tab, you’ll find an overview of: +在此选项卡中,您将找到以下内容的概述: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph -- Updated at date details +- 您正在策展的所有带有信号细节的子图 +- 每个子图的共享总数 +- 查询每个子图的奖励 +- 更新日期详情 ![Explorer Image 14](/img/Curation-Stats.png) -## Your Profile Settings +## 设置您的个人资料 -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +在您的用户配置文件中,您将能够管理您的个人配置文件详细信息(例如设置 ENS 名称)。 如果您是 索引人,则可以轻松访问更多设置。 在您的用户配置文件中,您将能够设置您的委托参数和操作员。 -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- 操作员代表索引人在协议中采取有限的操作,例如打开和关闭分配。 操作员通常是其他以太坊地址,与他们的抵押钱包分开,可以访问 索引人可以亲自设置的网络 +- 委托参数允许您控制 GRT 在您和您的委托人之间的分配。 ![Explorer Image 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, The Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +作为您进入去中心化数据世界的官方门户,无论您在网络中的角色如何,G​​raph 浏览器都允许您采取各种行动。 您可以通过打开地址旁边的下拉菜单进入您的个人资料设置,然后单击“设置”按钮。 -
![Wallet details](/img/Wallet-Details.png)
+
Wallet details
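
The network statistics described in the explorer page above (total stake, supply, per-epoch rewards and fees) can also be fetched programmatically rather than through the UI. Below is a minimal sketch of such a query; the entity and field names (`graphNetwork`, `epoches`, `totalTokensStaked`, and so on) are assumptions based on the public Graph Network subgraph and should be verified against its schema before use.

```graphql
# Hypothetical query against the Graph Network subgraph — field names are assumptions.
{
  graphNetwork(id: "1") {
    totalTokensStaked      # current total network stake
    totalDelegatedTokens   # stake delegated to Indexers
    totalSupply            # total GRT supply
  }
  epoches(first: 3, orderBy: startBlock, orderDirection: desc) {
    id
    startBlock
    endBlock
    totalQueryFees         # query fees generated during the epoch
    totalRewards           # indexing rewards collected during the epoch
  }
}
```
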
From 845c61b4e9a48caa9eddb8665924703d4de75bb6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Benoi=CC=82t=20Rouleau?= Date: Thu, 27 Jan 2022 20:24:12 -0500 Subject: [PATCH 241/241] Formatting fixes --- pages/ar/developer/create-subgraph-hosted.mdx | 26 +++++------ pages/ar/developer/querying-from-your-app.mdx | 4 +- pages/ar/explorer.mdx | 6 +-- pages/ar/indexing.mdx | 28 +++++------ pages/es/developer/create-subgraph-hosted.mdx | 32 ++++++------- pages/es/indexing.mdx | 38 +++++++-------- pages/ja/indexing.mdx | 46 +++++++++---------- pages/zh/curating.mdx | 10 ++-- .../hosted-service/query-hosted-service.mdx | 4 +- 9 files changed, 97 insertions(+), 97 deletions(-) diff --git a/pages/ar/developer/create-subgraph-hosted.mdx b/pages/ar/developer/create-subgraph-hosted.mdx index c93bf5f14604..56aca0d2313e 100644 --- a/pages/ar/developer/create-subgraph-hosted.mdx +++ b/pages/ar/developer/create-subgraph-hosted.mdx @@ -46,7 +46,7 @@ The Graph Network supports subgraphs indexing mainnet Ethereum: يعتمد Graph's Hosted Service على استقرار وموثوقية التقنيات الأساسية ، وهي نقاط JSON RPC endpoints. المتوفرة. سيتم تمييز الشبكات الأحدث على أنها في مرحلة beta حتى تثبت الشبكة نفسها من حيث الاستقرار والموثوقية وقابلية التوسع. خلال هذه الفترة beta ، هناك خطر حدوث عطل وسلوك غير متوقع. -تذكر أنك ** لن تكون قادرا ** على نشر subgraph يفهرس شبكة non-mainnet لـ شبكة Graph اللامركزية في \[Subgraph Studio \](/ studio / subgraph-studio). +تذكر أنك ** لن تكون قادرا ** على نشر subgraph يفهرس شبكة non-mainnet لـ شبكة Graph اللامركزية في [Subgraph Studio](/studio/subgraph-studio). ## من عقد موجود @@ -218,15 +218,15 @@ Null value resolved for non-null field 'name' ندعم المقاييس التالية في GraphQL API الخاصة بنا: -| النوع | الوصف | -| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | -| `ID` | يتم تخزينه كـ `string`. | -| `String` | لقيم `string`. لا يتم دعم اNull ويتم إزالتها تلقائيا. | -| `Boolean` | لقيم `boolean`. | -| `Int` | GraphQL spec تعرف `Int` بحجم 32 بايت. | -| `BigInt` | أعداد صحيحة كبيرة. يستخدم لأنواع Ethereum `uint32` ، `int64` ، `uint64` ، ... ، `uint256`. ملاحظة: كل شيء تحت `uint32` ، مثل `int32` أو `uint24` أو `int8` يتم تمثيله كـ `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. يتراوح نطاق الأس من −6143 إلى +6144. مقربة إلى 34 رقما. | +| النوع | الوصف | +| --- | --- | +| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | +| `ID` | يتم تخزينه كـ `string`. | +| `String` | لقيم `string`. لا يتم دعم اNull ويتم إزالتها تلقائيا. | +| `Boolean` | لقيم `boolean`. | +| `Int` | GraphQL spec تعرف `Int` بحجم 32 بايت. | +| `BigInt` | أعداد صحيحة كبيرة. يستخدم لأنواع Ethereum `uint32` ، `int64` ، `uint64` ، ... ، `uint256`. ملاحظة: كل شيء تحت `uint32` ، مثل `int32` أو `uint24` أو `int8` يتم تمثيله كـ `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a signficand and an exponent. يتراوح نطاق الأس من −6143 إلى +6144. مقربة إلى 34 رقما. | #### Enums @@ -458,7 +458,7 @@ query { ## كتابة الـ Mappings -The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. 
تتم كتابة الـ Mappings في مجموعة فرعية من [ TypeScript ](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) تسمى \[AssemblyScript \](https: //github.com/AssemblyScript/assemblyscript/wiki) والتي يمكن ترجمتها إلى WASM ([ WebAssembly ](https://webassembly.org/)). يعتبر AssemblyScript أكثر صرامة من TypeScript العادي ، ولكنه يوفر تركيبا مألوفا. +The mappings transform the Ethereum data your mappings are sourcing into entities defined in your schema. تتم كتابة الـ Mappings في مجموعة فرعية من [ TypeScript ](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) تسمى \[AssemblyScript \](https://github.com/AssemblyScript/assemblyscript/wiki) والتي يمكن ترجمتها إلى WASM ([ WebAssembly ](https://webassembly.org/)). يعتبر AssemblyScript أكثر صرامة من TypeScript العادي ، ولكنه يوفر تركيبا مألوفا. لكل معالج حدث تم تعريفه في `subgraph.yaml` ضمن `mapping.eventHandlers` ، قم بإنشاء دالة صادرة بنفس الاسم. يجب أن يقبل كل معالج بارمترا واحدا يسمى `event` بنوع مطابق لاسم الحدث الذي تتم معالجته. @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > ** ملاحظة: ** مصدر البيانات الجديد سيعالج فقط الاستدعاءات والأحداث للكتلة التي تم إنشاؤها فيه وجميع الكتل التالية ، ولكنه لن يعالج البيانات التاريخية ، أي البيانات الموجودة في الكتل السابقة. -> +> > إذا كانت الكتل السابقة تحتوي على بيانات ذات صلة بمصدر البيانات الجديد ، فمن الأفضل فهرسة تلك البيانات من خلال قراءة الحالة الحالية للعقد وإنشاء كيانات تمثل تلك الحالة في وقت إنشاء مصدر البيانات الجديد. ### سياق مصدر البيانات @@ -684,7 +684,7 @@ dataSources: ``` > ** ملاحظة: ** يمكن البحث عن كتلة إنشاء العقد بسرعة على Etherscan: -> +> > 1. ابحث عن العقد بإدخال عنوانه في شريط البحث. > 2. انقر فوق hash إجراء الإنشاء في قسم `Contract Creator`. > 3. قم بتحميل صفحة تفاصيل الإجراء حيث ستجد كتلة البدء لذلك العقد. diff --git a/pages/ar/developer/querying-from-your-app.mdx b/pages/ar/developer/querying-from-your-app.mdx index f3decc0d1768..41970d4d6bc8 100644 --- a/pages/ar/developer/querying-from-your-app.mdx +++ b/pages/ar/developer/querying-from-your-app.mdx @@ -24,7 +24,7 @@ Here are a couple of the more popular GraphQL clients in the ecosystem and how t ### Apollo client -[Apoolo client ](https://www.apollographql.com/docs/)يدعم مشاريع الويب بما في ال framework مثل React و Vue ، بالإضافة إلى mobile clients مثل iOS و Android و React Native. +[Apoolo client](https://www.apollographql.com/docs/)يدعم مشاريع الويب بما في ال framework مثل React و Vue ، بالإضافة إلى mobile clients مثل iOS و Android و React Native. لنلقِ نظرة على كيفية جلب البيانات من Subgraph وذلك باستخدام Apollo client في مشروع ويب. @@ -100,7 +100,7 @@ client ### URQL -هناك خيار آخر وهو [ URQL ](https://formidable.com/open-source/urql/) ، وهي مكتبة GraphQL client أخف وزنا إلى حد ما. +هناك خيار آخر وهو [URQL](https://formidable.com/open-source/urql/) ، وهي مكتبة GraphQL client أخف وزنا إلى حد ما. لنلقِ نظرة على كيفية جلب البيانات من Subgraph باستخدام URQL في مشروع ويب. diff --git a/pages/ar/explorer.mdx b/pages/ar/explorer.mdx index ae31b016d8a4..7c9bb396474a 100644 --- a/pages/ar/explorer.mdx +++ b/pages/ar/explorer.mdx @@ -11,7 +11,7 @@ title: مستكشف title="مشغل فيديو يوتيوب" frameBorder="0" allowFullScreen -> + >
## Subgraphs @@ -60,7 +60,7 @@ Let’s start with the Indexers. دعونا نبدأ مع المفهرسين ا يمكن للمفهرسين كسب كلا من رسوم الاستعلام ومكافآت الفهرسة. يحدث هذا عندما يقوم المشاركون في الشبكة بتفويض GRT للمفهرس. يتيح ذلك للمفهرسين تلقي رسوم الاستعلام ومكافآت بناء على بارامترات المفهرس الخاصة به. يتم تعيين بارامترات الفهرسة عن طريق النقر على الجانب الأيمن من الجدول ، أو بالانتقال إلى ملف تعريف المفهرس والنقر فوق زر "Delegate". -لمعرفة المزيد حول كيفية أن تصبح مفوضا كل ما عليك فعله هو التوجه إلى [ الوثائق الرسمية ](/delegating) أو [ أكاديمية The Graph ](https://docs.thegraph.academy/network/delegators). +لمعرفة المزيد حول كيفية أن تصبح مفوضا كل ما عليك فعله هو التوجه إلى [الوثائق الرسمية](/delegating) أو [ أكاديمية The Graph ](https://docs.thegraph.academy/network/delegators). ![Indexing details pane](/img/Indexing-Details-Pane.png) @@ -76,7 +76,7 @@ Let’s start with the Indexers. دعونا نبدأ مع المفهرسين ا ![صورة المستكشف 6](/img/Curation-Overview.png) -إذا كنت تريد معرفة المزيد عن دور المنسق ، فيمكنك القيام بذلك عن طريق زيارة الروابط التالية ـ [ أكاديمية The Graph ](https://thegraph.academy/curators/) أو \[ الوثائق الرسمية. \](/ curating) +إذا كنت تريد معرفة المزيد عن دور المنسق ، فيمكنك القيام بذلك عن طريق زيارة الروابط التالية ـ [ أكاديمية The Graph ](https://thegraph.academy/curators/) أو \[ الوثائق الرسمية. \](/curating) ### 3. المفوضون Delegators diff --git a/pages/ar/indexing.mdx b/pages/ar/indexing.mdx index e77c0cb33880..e6ad889e20a5 100644 --- a/pages/ar/indexing.mdx +++ b/pages/ar/indexing.mdx @@ -115,7 +115,7 @@ query indexerAllocations { - **كبيرة** - مُعدة لفهرسة جميع ال subgraphs المستخدمة حاليا وأيضا لخدمة طلبات حركة مرور البيانات ذات الصلة. | Setup | (CPUs) | (memory in GB) | (disk in TBs) | (CPUs) | (memory in GB) | -| ----- |:------:|:--------------:|:-------------:|:------:|:--------------:| +| ----- | :----: | :------------: | :-----------: | :----: | :------------: | | صغير | 4 | 8 | 1 | 4 | 16 | | قياسي | 8 | 30 | 1 | 12 | 48 | | متوسط | 16 | 64 | 2 | 32 | 64 | @@ -149,20 +149,20 @@ query indexerAllocations { #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| --- | --- | --- | --- | --- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### خدمة المفهرس -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| --- | --- | --- | --- | --- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -427,7 +427,7 @@ docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -**ملاحظة**: بعد بدء ال containers ، يجب أن تكون خدمة المفهرس متاحة على [http: // localhost: 7600 ](http://localhost:7600) ويجب على وكيل المفهرس عرض API إدارة المفهرس على [ http: // localhost: 18000 / ](http://localhost:18000/). +**ملاحظة**: بعد بدء ال containers ، يجب أن تكون خدمة المفهرس متاحة على [http://localhost:7600](http://localhost:7600) ويجب على وكيل المفهرس عرض API إدارة المفهرس على [http://localhost:18000/](http://localhost:18000/). ```sh # Indexer service @@ -617,7 +617,7 @@ indexer cost set model my_model.agora ### Stake in the protocol -الخطوات الأولى للمشاركة في الشبكة كمفهرس هي الموافقة على البروتوكول وصناديق الأسهم، و (اختياريا) إعداد عنوان المشغل لتفاعلات البروتوكول اليومية. _ ** ملاحظة **: لأغراض الإرشادات ، سيتم استخدام Remix للتفاعل مع العقد ، ولكن لا تتردد في استخدام الأداة التي تختارها (\[OneClickDapp \](https: // oneclickdapp.com/) و [ ABItopic ](https://abitopic.io/) و [ MyCrypto ](https://www.mycrypto.com/account) وهذه بعض الأدوات المعروفة)._ +الخطوات الأولى للمشاركة في الشبكة كمفهرس هي الموافقة على البروتوكول وصناديق الأسهم، و (اختياريا) إعداد عنوان المشغل لتفاعلات البروتوكول اليومية. _ ** ملاحظة **: لأغراض الإرشادات ، سيتم استخدام Remix للتفاعل مع العقد ، ولكن لا تتردد في استخدام الأداة التي تختارها (\[OneClickDapp \](https://oneclickdapp.com/) و [ABItopic](https://abitopic.io/) و [MyCrypto](https://www.mycrypto.com/account) وهذه بعض الأدوات المعروفة)._ بعد أن تم إنشاؤه بواسطة المفهرس ، يمر التخصيص السليم عبر أربع حالات. diff --git a/pages/es/developer/create-subgraph-hosted.mdx b/pages/es/developer/create-subgraph-hosted.mdx index d6bb245d55c1..bb9dc04a9df1 100644 --- a/pages/es/developer/create-subgraph-hosted.mdx +++ b/pages/es/developer/create-subgraph-hosted.mdx @@ -138,7 +138,7 @@ Las entradas importantes a actualizar para el manifiesto son: - `dataSources.mapping.callHandlers`: enumera las funciones de contrato inteligente a las que reacciona este subgrafo y los handlers en el mapeo que transforman las entradas y salidas a las llamadas de función en entidades en el almacén. -- `dataSources.mapping.blockHandlers`: enumera los bloques a los que reacciona este subgrafo y los handlers en el mapeo que se ejecutan cuando un bloque se agrega a la cadena. Sin un filtro, el handler de bloque se ejecutará en cada bloque. Se puede proporcionar un filtro opcional con los siguientes tipos: call`. Un filtro`call` ejecutará el handler si el bloque contiene al menos una llamada al contrato de la fuente de datos. +- `dataSources.mapping.blockHandlers`: enumera los bloques a los que reacciona este subgrafo y los handlers en el mapeo que se ejecutan cuando un bloque se agrega a la cadena. Sin un filtro, el handler de bloque se ejecutará en cada bloque. Se puede proporcionar un filtro opcional con los siguientes tipos: `call`. Un filtro `call` ejecutará el handler si el bloque contiene al menos una llamada al contrato de la fuente de datos. Un único subgrafo puede indexar datos de múltiples contratos inteligentes. Añade una entrada por cada contrato del que haya que indexar datos a la array `dataSources`. @@ -218,15 +218,15 @@ Cada entidad debe tener un campo `id`, que es de tipo `ID!` (string). 
El campo ` Admitimos los siguientes scalars en nuestra API GraphQL: -| Tipo | Descripción | -| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y addresses de Ethereum. | -| `ID` | Almacenado como un `string`. | -| `String` | Scalar para valores `string`. Los caracteres null no se admiten y se eliminan automáticamente. | -| `Boolean` | Scalar para valores `boolean`. | -| `Int` | The GraphQL spec define `Int` para tener un tamano de 32 bytes. | -| `BigInt` | Números enteros grandes. Usados para los tipos `uint32`, `int64`, `uint64`, ..., `uint256` de Ethereum. Nota: Todo debajo de `uint32`, como `int32`, `uint24` o `int8` es representado como `i32`. | -| `BigDecimal` | `BigDecimal` Decimales de alta precisión representados como un signo y un exponente. El rango de exponentes va de -6143 a +6144. Redondeado a 34 dígitos significativos. | +| Tipo | Descripción | +| --- | --- | +| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y addresses de Ethereum. | +| `ID` | Almacenado como un `string`. | +| `String` | Scalar para valores `string`. Los caracteres null no se admiten y se eliminan automáticamente. | +| `Boolean` | Scalar para valores `boolean`. | +| `Int` | The GraphQL spec define `Int` para tener un tamano de 32 bytes. | +| `BigInt` | Números enteros grandes. Usados para los tipos `uint32`, `int64`, `uint64`, ..., `uint256` de Ethereum. Nota: Todo debajo de `uint32`, como `int32`, `uint24` o `int8` es representado como `i32`. | +| `BigDecimal` | `BigDecimal` Decimales de alta precisión representados como un signo y un exponente. El rango de exponentes va de -6143 a +6144. Redondeado a 34 dígitos significativos. | #### Enums @@ -451,10 +451,10 @@ Diccionarios de idiomas admitidos: Algoritmos admitidos para ordenar los resultados: -| Algoritmos | Descripción | -| ------------------- | -------------------------------------------------------------------------------------------------- | -| rango | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | -| rango de Proximidad | Similar al rango, pero también incluye la proximidad de los matches. | +| Algoritmos | Descripción | +| --- | --- | +| rango | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | +| rango de Proximidad | Similar al rango, pero también incluye la proximidad de los matches. | ## Escribir Mapeos @@ -627,7 +627,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Nota:** Un nuevo origen de datos sólo procesará las llamadas y los eventos del bloque en el que fue creado y todos los bloques siguientes, pero no procesará los datos históricos, es decir, los datos que están contenidos en bloques anteriores. -> +> > Si los bloques anteriores contienen datos relevantes para la nueva fuente de datos, lo mejor es indexar esos datos leyendo el estado actual del contrato y creando entidades que representen ese estado en el momento de crear la nueva fuente de datos. ### Contexto de la Fuente de Datos @@ -684,7 +684,7 @@ dataSources: ``` > **Nota:** El bloque de creación del contrato se puede buscar rápidamente en Etherscan: -> +> > 1. Busca el contrato introduciendo su dirección en la barra de búsqueda. > 2. 
Haz clic en el hash de la transacción de creación en la sección `Contract Creator`. > 3. Carga la página de detalles de la transacción, donde encontrarás el bloque inicial de ese contrato. diff --git a/pages/es/indexing.mdx b/pages/es/indexing.mdx index 2485f360b904..1e1c92633f0b 100644 --- a/pages/es/indexing.mdx +++ b/pages/es/indexing.mdx @@ -115,11 +115,11 @@ Los indexadores pueden diferenciarse aplicando técnicas avanzadas para tomar de - **Grande**: Preparado para indexar todos los subgrafos utilizados actualmente y atender solicitudes para el tráfico relacionado. | Configuración | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs

(memory in GBs) | -| ------------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Pequeño | 4 | 8 | 1 | 4 | 16 | -| Estándar | 8 | 30 | 1 | 12 | 48 | -| Medio | 16 | 64 | 2 | 32 | 64 | -| Grande | 72 | 468 | 3,5 | 48 | 184 | +| --- | :-: | :-: | :-: | :-: | :-: | +| Pequeño | 4 | 8 | 1 | 4 | 16 | +| Estándar | 8 | 30 | 1 | 12 | 48 | +| Medio | 16 | 64 | 2 | 32 | 64 | +| Grande | 72 | 468 | 3,5 | 48 | 184 | ### ¿Cuáles son algunas de las precauciones de seguridad básicas que debe tomar un indexador? @@ -149,24 +149,24 @@ Nota: Para admitir el escalado ágil, se recomienda que las inquietudes de consu #### Graph Node -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| ------ | ---------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------- | -| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | -| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | -| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| --- | --- | --- | --- | --- | +| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | +| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | +| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | #### Servicio de Indexador -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| ------ | ----------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| --- | --- | --- | --- | --- | +| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | #### Agente Indexador -| Puerto | Objeto | Rutas | Argumento CLI | Variable de
Entorno | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de
Entorno | | ------ | ----------------------------- | ----- | ------------------------- | --------------------------------------- | | 8000 | API de gestión de indexadores | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | @@ -262,7 +262,7 @@ EOF #### Usa Terraform para crear infraestructura -Antes de ejecutar cualquier comando, lee [ variables.tf ](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) y crea un archivo `terraform.tfvars` en este directorio (o modifica el que creamos en el último paso). Para cada variable en la que deseas anular el valor predeterminado, o donde necesites establecer un valor, ingresa una configuración en `terraform.tfvars`. +Antes de ejecutar cualquier comando, lee [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) y crea un archivo `terraform.tfvars` en este directorio (o modifica el que creamos en el último paso). Para cada variable en la que deseas anular el valor predeterminado, o donde necesites establecer un valor, ingresa una configuración en `terraform.tfvars`. - Ejecuta los siguientes comandos para crear la infraestructura. @@ -617,7 +617,7 @@ indexer cost set model my_model.agora ### Participar en el protocolo -Los primeros pasos para participar en la red como Indexador son aprobar el protocolo, stakear fondos y (opcionalmente) configurar una dirección de operador para las interacciones diarias del protocolo. _ **Nota**: A los efectos de estas instrucciones, Remix se utilizará para la interacción del contrato, pero no dudes en utilizar la herramienta que elijas (\[OneClickDapp\](https: // oneclickdapp.com/), [ABItopic](https://abitopic.io/) y [MyCrypto](https://www.mycrypto.com/account) son algunas otras herramientas conocidas)._ +Los primeros pasos para participar en la red como Indexador son aprobar el protocolo, stakear fondos y (opcionalmente) configurar una dirección de operador para las interacciones diarias del protocolo. _ **Nota**: A los efectos de estas instrucciones, Remix se utilizará para la interacción del contrato, pero no dudes en utilizar la herramienta que elijas (\[OneClickDapp\](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) y [MyCrypto](https://www.mycrypto.com/account) son algunas otras herramientas conocidas)._ Después de ser creada por un indexador, una asignación saludable pasa por cuatro estados. diff --git a/pages/ja/indexing.mdx b/pages/ja/indexing.mdx index e02be5538cbc..dafa47e72922 100644 --- a/pages/ja/indexing.mdx +++ b/pages/ja/indexing.mdx @@ -26,7 +26,7 @@ import { Difficulty } from '@/components' ### 報酬の分配方法は? 
-インデキシング報酬は、年間 3%の発行量に設定されているプロトコル・インフレから得られます。 報酬は、それぞれのサブグラフにおけるすべてのキュレーション・シグナルの割合に基づいてサブグラフに分配され、そのサブグラフに割り当てられたステークに基づいてインデクサーに分配されます。 **特典を受けるためには、仲裁憲章で定められた基準を満たす有効なPOI(Proof of Indexing)で割り当てを終了する必要があります。** +インデキシング報酬は、年間 3%の発行量に設定されているプロトコル・インフレから得られます。 報酬は、それぞれのサブグラフにおけるすべてのキュレーション・シグナルの割合に基づいてサブグラフに分配され、そのサブグラフに割り当てられたステークに基づいてインデクサーに分配されます。 **特典を受けるためには、仲裁憲章で定められた基準を満たす有効な POI(Proof of Indexing)で割り当てを終了する必要があります。** コミュニティでは、報酬を計算するための数多くのツールが作成されており、それらは[コミュニティガイドコレクション](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)にまとめられています。 また、[Discord サーバー](https://discord.gg/vtvv7FP)の#delegators チャンネルや#indexers チャンネルでも、最新のツールリストを見ることができます。 @@ -65,13 +65,13 @@ Use Etherscan to call `getRewards()`: - Etherscan interface to Rewards contract に移動します。 * `getRewards()`を呼び出します - - **10を拡大します。 getRewards**のドロップダウン + - **10 を拡大します。 getRewards**のドロップダウン - 入力欄に**allocationID**を入力 - **Query**ボタンをクリック ### 争議(disputes)とは何で、どこで見ることができますか? -インデクサークエリとアロケーションは、期間中に The Graph 上で争議することができます。 争議期間は、争議の種類によって異なります。 クエリ/裁定には7エポックスの紛争窓口があり、割り当てには56エポックスがあります。 これらの期間が経過した後は、割り当てやクエリのいずれに対しても紛争を起こすことはできません。 紛争が開始されると、Fishermenは最低10,000GRTのデポジットを要求され、このデポジットは紛争が最終的に解決されるまでロックされます。 フィッシャーマンとは、紛争を開始するネットワーク参加者のことです。 +インデクサークエリとアロケーションは、期間中に The Graph 上で争議することができます。 争議期間は、争議の種類によって異なります。 クエリ/裁定には 7 エポックスの紛争窓口があり、割り当てには 56 エポックスがあります。 これらの期間が経過した後は、割り当てやクエリのいずれに対しても紛争を起こすことはできません。 紛争が開始されると、Fishermen は最低 10,000GRT のデポジットを要求され、このデポジットは紛争が最終的に解決されるまでロックされます。 フィッシャーマンとは、紛争を開始するネットワーク参加者のことです。 争議は UI のインデクサーのプロフィールページの`Disputes`タブで確認できます。 @@ -79,7 +79,7 @@ Use Etherscan to call `getRewards()`: - 争議が引き分けた場合、フィッシャーマンのデポジットは返還され、争議中のインデクサーはスラッシュされることはありません。 - 争議が受け入れられた場合、フィッシャーマンがデポジットした GRT は返却され、争議中のインデクサーはスラッシュされ、フィッシャーマンはスラッシュされた GRT の 50%を獲得します。 -紛争は、UIのインデクサーのプロファイルページの`紛争`タブで確認できます。 +紛争は、UI のインデクサーのプロファイルページの`紛争`タブで確認できます。 ### クエリフィーリベートとは何ですか、またいつ配布されますか? @@ -114,12 +114,12 @@ Use Etherscan to call `getRewards()`: - **Medium** - 100 個のサブグラフと 1 秒あたり 200 ~ 500 のリクエストをサポートするプロダクションインデクサー - **Large** - 現在使用されているすべてのサブグラフのインデックスを作成し、関連するトラフィックのリクエストに対応します -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs

(memory in GBs) | -| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs

(memory in GBs) | +| --- | :-: | :-: | :-: | :-: | :-: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### インデクサーが取るべきセキュリティ対策は? @@ -149,20 +149,20 @@ Use Etherscan to call `getRewards()`: #### グラフノード -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------------- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| --- | --- | --- | --- | --- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| --- | --- | --- | --- | --- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -448,7 +448,7 @@ docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -[Google Cloud で Terraform を使ってサーバーインフラを構築するのセクション ](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) を参照してください。 +[Google Cloud で Terraform を使ってサーバーインフラを構築するのセクション](/indexing#setup-server-infrastructure-using-terraform-on-google-cloud) を参照してください。 #### K8s と Terraform の使用 @@ -658,7 +658,7 @@ setDelegationParameters(950000, 600000, 500) ### アロケーションの寿命 -インデクサーによって作成された後、健全なアロケーションは4つの状態を経ます。 +インデクサーによって作成された後、健全なアロケーションは 4 つの状態を経ます。 - **Active**- オンチェーンでアロケーションが作成されると(allocateFrom())、それは**active**であるとみなされます。 インデクサー自身やデリゲートされたステークの一部がサブグラフの配置に割り当てられ、これによりインデクシング報酬を請求したり、そのサブグラフの配置のためにクエリを提供したりすることができます。 インデクサエージェントは、インデキシングルールに基づいて割り当ての作成を管理します。 diff --git a/pages/zh/curating.mdx b/pages/zh/curating.mdx index 66ed9fe2bd2a..774fac8c90ee 100644 --- a/pages/zh/curating.mdx +++ b/pages/zh/curating.mdx @@ -6,7 +6,7 @@ title: 策展 在发出信号时,策展人可以决定在子图的一个特定版本上发出信号,或者使用自动迁移发出信号。 当使用自动迁移发出信号时,策展人的份额将始终升级到由开发商发布的最新版本。 如果你决定在一个特定的版本上发出信号,股份将始终保持在这个特定的版本上。 -Remember that curation is risky. 请做好你的工作,确保你在你信任的子图上进行策展。 请做好你的工作,确保你在你信任的子图上进行策展。 创建子图是没有权限的,所以人们可以创建子图,并称其为任何他们想要的名字。 关于策展风险的更多指导,请查看 [The Graph Academy 的策展指南。 ](https://thegraph.academy/curators/) +Remember that curation is risky. 请做好你的工作,确保你在你信任的子图上进行策展。 请做好你的工作,确保你在你信任的子图上进行策展。 创建子图是没有权限的,所以人们可以创建子图,并称其为任何他们想要的名字。 关于策展风险的更多指导,请查看 [The Graph Academy 的策展指南。](https://thegraph.academy/curators/) ## 联合曲线 101 @@ -33,7 +33,7 @@ Remember that curation is risky. 请做好你的工作,确保你在你信任 ## 如何进行信号处理 -现在我们已经介绍了关于粘合曲线如何工作的基本知识,这就是你将如何在子图上发出信号。 在 The Graph 资源管理器的策展人选项卡中,策展人将能够根据网络统计数据对某些子图发出信号和取消信号。 关于如何在资源管理器中做到这一点的一步步概述,请[点击这里。 ](https://thegraph.com/docs/explorer) +现在我们已经介绍了关于粘合曲线如何工作的基本知识,这就是你将如何在子图上发出信号。 在 The Graph 资源管理器的策展人选项卡中,策展人将能够根据网络统计数据对某些子图发出信号和取消信号。 关于如何在资源管理器中做到这一点的一步步概述,请[点击这里。](https://thegraph.com/docs/explorer) 策展人可以选择在特定的子图版本上发出信号,或者他们可以选择让他们的策展份额自动迁移到该子图的最新生产版本。 这两种策略都是有效的,都有各自的优点和缺点。 @@ -49,7 +49,7 @@ Remember that curation is risky. 请做好你的工作,确保你在你信任 因此,如果索引人不得不猜测他们应该索引哪些子图,那么他们赚取强大的查询费用的机会就会很低,因为他们没有办法验证哪些子图是高质量的。 进入策展阶段。 -策展人使 The Graph 网络变得高效,信号是策展人用来让索引人知道一个子图是好的索引的过程,其中 GRT 被存入子图的粘合曲线。 索引人可以从本质上信任策展人的信号,因为一旦发出信号,策展人就会为该子图铸造一个策展份额,使他们有权获得该子图所带来的部分未来查询费用。 策展人的信号以ERC20代币的形式表示,称为Graph Curation Shares(GCS)。 想赚取更多查询费的策展人应该向他们预测会给网络带来大量费用的子图发出他们的 GRT 信号。 策展人不能因为不良行为而被砍掉,但有一个对策展人的存款税,以抑制可能损害网络完整性的不良决策。 如果策展人选择在一个低质量的子图上进行策展,他们也会赚取较少的查询费,因为有较少的查询需要处理,或者有较少的索引人处理这些查询。 请看下面的图! +策展人使 The Graph 网络变得高效,信号是策展人用来让索引人知道一个子图是好的索引的过程,其中 GRT 被存入子图的粘合曲线。 索引人可以从本质上信任策展人的信号,因为一旦发出信号,策展人就会为该子图铸造一个策展份额,使他们有权获得该子图所带来的部分未来查询费用。 策展人的信号以 ERC20 代币的形式表示,称为 Graph Curation Shares(GCS)。 想赚取更多查询费的策展人应该向他们预测会给网络带来大量费用的子图发出他们的 GRT 信号。 策展人不能因为不良行为而被砍掉,但有一个对策展人的存款税,以抑制可能损害网络完整性的不良决策。 如果策展人选择在一个低质量的子图上进行策展,他们也会赚取较少的查询费,因为有较少的查询需要处理,或者有较少的索引人处理这些查询。 请看下面的图! ![Signaling diagram](/img/curator-signaling.png) @@ -64,7 +64,7 @@ Remember that curation is risky. 请做好你的工作,确保你在你信任 3. 当策展人烧掉他们的股份以提取 GRT 时,剩余股份的 GRT 估值将被降低。 请注意,在某些情况下,策展人可能决定 **一次性**烧掉他们的股份。 这种情况可能很常见,如果一个 dApp 开发者停止版本/改进和查询他们的子图,或者如果一个子图失败。 因此,剩下的策展人可能只能提取他们最初 GRT 的一小部分。 关于风险较低的网络角色,请看委托人 \[Delegators\](https://thegraph.com/docs/delegating). 4. 
一个子图可能由于错误而失败。 一个失败的子图不会累积查询费用。 因此,你必须等待,直到开发人员修复错误并部署一个新的版本。 - 如果你订阅了一个子图的最新版本,你的股份将自动迁移到该新版本。 这将产生 0.5%的策展税。 - - 如果你已经在一个特定的子图版本上发出信号,但它失败了,你将不得不手动烧毁你的策展税。 请注意,你可能会收到比你最初存入策展曲线更多或更少的 GRT,这是作为策展人的相关风险。 然后你可以在新的子图版本上发出信号,从而产生1%的策展税。 + - 如果你已经在一个特定的子图版本上发出信号,但它失败了,你将不得不手动烧毁你的策展税。 请注意,你可能会收到比你最初存入策展曲线更多或更少的 GRT,这是作为策展人的相关风险。 然后你可以在新的子图版本上发出信号,从而产生 1%的策展税。 ## 策展常见问题 @@ -100,5 +100,5 @@ Remember that curation is risky. 请做好你的工作,确保你在你信任 title="YouTube video player" frameBorder="0" allowFullScreen -> + >
diff --git a/pages/zh/hosted-service/query-hosted-service.mdx b/pages/zh/hosted-service/query-hosted-service.mdx index ad41c4bede90..3655c0268dda 100644 --- a/pages/zh/hosted-service/query-hosted-service.mdx +++ b/pages/zh/hosted-service/query-hosted-service.mdx @@ -4,7 +4,7 @@ title: 查询托管服务 部署子图后,请访问[托管服务](https://thegraph.com/hosted-service/) 以打开 [GraphiQL](https://github.com/graphql/graphiql) 界面,您可以在其中通过发出查询和查看数据模式来探索已经部署的子图的 GraphQL API。 -下面提供了一个示例,但请参阅 [查询 API ](/developer/graphql-api) 以获取有关如何查询子图实体的完整参考。 +下面提供了一个示例,但请参阅 [查询 API](/developer/graphql-api) 以获取有关如何查询子图实体的完整参考。 #### 示例 @@ -21,7 +21,7 @@ title: 查询托管服务 ## 使用托管服务 -Graph Explorer 及其 GraphQL playground是探索和查询托管服务上部署的子图的有用方式。 +Graph Explorer 及其 GraphQL playground 是探索和查询托管服务上部署的子图的有用方式。 下面详细介绍了一些主要功能:
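
The hosted-service page above points readers to an example query and the Query API reference. For orientation, a query against any deployed subgraph takes the same general shape regardless of its schema. The sketch below uses placeholder entity and field names (`tokens`, `owner`, `contentURI`) that must be replaced with whatever the target subgraph actually defines.

```graphql
# Placeholder entity/field names — replace with the target subgraph's schema.
{
  tokens(first: 5, orderBy: id, orderDirection: asc) {
    id
    owner
    contentURI
  }
}
```
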