Analytics/Explorer Indexer #167

Closed
trajan0x opened this issue Sep 5, 2022 · 0 comments
Overview

With the near completion of Scribe (#114), we're ready to start indexing events for our analytics api. The current state of the analytics api is quite convoluted: analytics.synapseprotocol.com is currently broken on several chains and missing lots of data. You can see that code here, along with the explorer code here.

A second iteration of analytics, comprised of synapse-indexer and analytics-api, is too complex and too stateful to deploy (which was part of the motivation for #114, along with issues like #153 popping up all over the place rather than in one place where they can be fixed all at once).

The finished product will be a graphql api that looks like this, implemented in Go, but the first step is to replicate the indexer.

Let's walk through a few real bridging transactions and how they should be indexed. Since this is your first contribution, I'll run through some steps to get started further below.

The Indexing Process

Here we take an example from the live bridge and walk through the indexing process. Your indexer will take a yaml config file that should look something like the following. I only define two chains, since those are the two used in the example. These config values will make sense as I go through the example.

The Config:

```yaml
chains:
  - id: 1 # chain id
    url: "http://127.0.0.1:8545" # rpc url
    contracts:
      # this is a list since in some cases we have multiple versions of the same contract. You'll need to define these as an enum somewhere
      - type: bridge
        # this will be sourced by the person writing the config from the deployment receipt blockNumber, e.g. this is from https://github.com/synapsecns/synapse-contracts/blob/master/deployments/mainnet/SynapseBridge.json
        start_block: 13033669
      # some contracts (really only bridgeconfig/poolconfig: an older iteration of bridge config) are only on ethereum
      - type: "bridgeconfig"
        address: "0x5217c83ca75559B1f8a8803824E5b7ac233A12a1"
        # see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/mainnet/BridgeConfigV3.json#L1100
        start_block: 14259367
      # an older version of bridge config
      - type: "bridgeconfig"
        address: "0xAE908bb4905bcA9BdE0656CC869d0F23e77875E7"
        start_block: 13949327
        # ends when we start using v3
        end_block: 14259367
  - id: 42161
    url: "http://127.0.0.1:8546"
    contracts:
      - type: bridge
        # see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/SynapseBridge.json#L2
        address: "0x6F4e8eBa4D337f874Ab57478AcC2Cb5BACdc19c9"
        # see: https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/SynapseBridge.json#L1462
        start_block: 657404
      - type: pool
        # https://github.com/synapsecns/synapse-contracts/blob/master/deployments/arbitrum/nUSDPoolV3.json#L2
        address: "0x9Dd329F5411466d9e0C488fF72519CA9fEf0cb40"
        # see: https://arbiscan.io/tx/0x500afe6cf8e927ccad7a8a2e01f7d3bfc2fa9ef3af6a55f841d71bd5b62c84d3 — older deploys don't have the receipt, so we pull the deploy block from the contract page in the explorer (arbiscan in this case)
        start_block: 5152261
# url of the scribe service, should probably also be embeddable
scribe: http://scribe:1231
```

The Example:

Let's look at a live example. Here is a transaction which occurred on arbitrum. As we can see from the data, the user is bridging to ethereum.

Note: I've chosen the most complicated bridge type here; other types, such as mint, don't require bridgeconfig, etc.

*(screenshot)*

Bridge Parsing

This transaction is going to trigger a few events that will get populated in the contracts we watch on scribe. The first is the bridge event. The particular event triggered on the bridge is TokenRedeemAndRemove. We can see it contains the following items:

*(screenshot)*

We now know that on ethereum (chain id 1), 0x59719d517208b306eA9c7a9FD90D6215163323Ee will receive a minimum of 5330566953 nusd (which will then be swapped for tokenIndexTo: 0, which is usdc) before 1662394851 (Monday, September 5, 2022 4:20:51 PM). If the swap can't be completed, the user will receive nusd on the other end, which they can then trade for any token in the pool.

We can also look at the raw input data (for most transactions this can't be used for indexing, because other contracts can call ours, but it is helpful for understanding the flow) and see the method called:

*(screenshot)*

Pool Parsing:

Since it's a swapAndRedeemAndRemove, we can see in L2BridgeZap exactly what methods are called for the contract to execute:

*(screenshot)*

In addition to being passed in the input, these values are also emitted in a log that can be parsed by the abi we generated and inserted:

*(screenshot)*

We also have another event to index here: a swap.

We can see the raw swap data here:

*(screenshot)*

If we look at the swap itself:

*(screenshot)*

We can see exactly what happened here. We're going to want to index this so we can calculate pool volume.

The Receiving Chain

This transaction triggered a bridge that was then received at the other end. Let's take a look at the transaction here. We can see here that withdrawAndRemove was called.

One of the challenges of parsing transactions on the other end is that the pool address is never emitted directly:

*(screenshot)*

We can see from the contract that in cases where the swap is not successful, we simply transfer the token (nusd in this case) to the user. Since there's nothing more to index here, we can finish up after indexing just the receiving TokenWithdrawAndRemove, without any pool data.

Bridge Config

In cases where expectedOutput >= swapMinAmount (most cases), we'll also receive an event from a pool. But how do the validators know which pool to pass here? And why is the token address different from the one on the origin chain?

This is where bridgeconfig comes in. Two calls are made to BridgeConfigV3; in your case, these should be archive calls at the block_number of the transaction. First we call getTokenID(0x2913E812Cf0dcCA30FB28E6Cac3d2DCFF4497688, 42161). This is the token address from the call above and the chain id from above. This should be called on 0x5217c83ca75559B1f8a8803824E5b7ac233A12a1 rather than the other bridge config, since the current block number is greater than its start_block. If this tx were between blocks 13949327 and 14259367, we'd use 0xAE908bb4905bcA9BdE0656CC869d0F23e77875E7 instead.

We can try this out on etherscan here. This won't be an archive call, but it's good enough for us to see what happened, since bridge config hasn't changed in the meantime. We can see the tokenID is nusd:

*(screenshot)*

Now, let's figure out the token address we want to use on chainID 1 using the token id we just got:

*(screenshot)*

This data corresponds to this struct, in order:

*(screenshot)*

We can see here that the token address 0x1b84765de8b7566e4ceaf4d0fd3c5af52d3dde4f matches nusd on ethereum. Since this transaction is a swap, we want to query the pool config as well to see what pool we've swapped (or attempted to swap) on. Let's call getPoolConfig with the token address we received above:

*(screenshot)*

We can see the first argument is nusd and the second is a SwapFlashLoan contract. This is where the swap from nusd to usdc happened in our contract.

If we go back to the event logs for the tx we're inspecting here we can see an event emitted by this contract:

*(screenshot)*

Our topic map will tell us this is RemoveLiquidityOne. We'll need to store this for swap analytics. We can also see the amount of tokens the user actually received this way and use that for volume calculations.

*(screenshot)*

We can also see a TokenWithdrawAndRemove event in the logs:

*(screenshot)*

We'll want to index this.

One final thing to note: you can see the last indexed topic here is bytes32 kappa. Kappa is simply the keccak256(origin_tx_hash).

So in this transaction, we should've indexed the following:

  • TokenRedeemAndRemove: on arbitrum
  • TokenSwap: on arbitrum
  • [TokenWithdrawAndRemove](https://github.com/synapsecns/synapse-contracts/blob/9e390f7c826ab09c48c3c8fe3d040226ee8b3aa0/contracts/bridge/SynapseBridge.sol#L108): on ethereum
  • RemoveLiquidityOne: on ethereum

From this, we'll be able to compute a few things:

  • We can compute the price of nusd against usdc.
  • We can calculate the volume.
  • We can calculate the fees earned.

Steps to building the service

Abigen

First, you're going to create a new service in services/explorer; next, you're going to need to generate some contracts. This readme will walk you through the process. (Note: prior to the merge of #166, you could've imported synapse-node and used its contracts. The topics file and the bridge folder generally are worth referencing.) I'd recommend adding the contracts repo as a submodule in order to abigen against it. I'd also recommend giving the contracts a versioned name, as it's quite possible we'll have to generate multiple versions in order to parse events against them. For instance, we've had several iterations of BridgeConfig so far.

There are a few contracts you'll have to generate ABIs for in order to successfully track events from the bridge.

In general, all events from these contracts should be indexed in a standardized way (e.g. store all data in the db as structured data). Many of the bridge events are indexed here, so you should straight up be able to copy and paste the code. Ordinarily, copying and pasting code is a big no-no, but in this case, since we're deprecating synapse-node, it's fine. Crucially, you'll need the topicMap and the standardized parsing.

Config

Create a config parser for the config defined above; you should be able to use this file and the corresponding test as references. You'll use this to decide which contracts to index and their types.

Scribe Client

Create a graphql client against scribe. @CryptoMaxPlanck should be able to walk you through this, but your goal is to be able to query continuously and index against the JSON. Your best bet here is going to be to use the raw JSON scalar and call UnmarshalJSON on the ethereum types, e.g. for logs this method. These can then be used to parse out events, like so.

DB

I'd use DBService here for reference. You're going to want to store all these events in a format that can easily be aggregated in real time. You'll need a tiny bit of additional data, namely the prices. I'd probably handle this with a sql join.

GraphQL server:

You should be able to straight up copy this schema. It doesn't include analytics methods, but should be a good start to the server.

The server should be run independently of the indexer.

trajan0x added a commit that referenced this issue Sep 22, 2022
### Description
This PR inits services/explorer, an indexer and a service platform for analytics.
The specifics are as follows:
- basic contract generation via abigen (for contracts outlined in #167)
- basic config settings
- basic implementation of cli
- placeholders for db

**To Do**
- Verify Abigen process, add indexing functionality
- Integrate with scribe api
- More in [#167](#167)

### Metadata
Issue: [#167](#167)
PoIs: @trajan0x 

Co-authored-by: Trajan0x <trajan0x@users.noreply.github.com>
Co-authored-by: Max Planck <maxplanck.crypto@gmail.com>
@trajan0x trajan0x closed this as completed Feb 6, 2023