Prod Release 06/06/2024 #768

Merged · merged 16 commits into stable on Jun 6, 2024
Conversation

@darunrs (Collaborator) commented Jun 5, 2024

Bug Fixes:

  • Socket Hangup errors now contain a proper stack trace
  • Errors that terminate indexer execution are now properly displayed in logs rather than as [object Object]
  • Fixed broken links caused by the migration to dev.near.org

Changes:

  • Indexer forks are now tracked and stored in the contract
  • Indexer Management server endpoint exposed, and improvements made to indexer state handling
  • Utility classes for handling GraphQL calls and bitmap operations added to Block Streamer

Feature Releases:

  • New Logs Page!
    • Leverages the new logs tables, enabling richer filtering and searching as well as faster response times
    • Table formatting improved to reduce clutter and group relevant text
    • Search bar now searches all logs rather than just the logs on the current page
    • New toolbar added for filtering logs
    • Status and latest block height now refresh along with the logs
    • More improvements to come!
  • New Indexer Editor Page!
    • Editor page UI overhauled to improve the visibility of the various buttons available on the page
    • Color theme and styling unified across banners
    • Code editor no longer clips the bottom of the page

morgsmccauley and others added 13 commits May 9, 2024 09:28
Rejected promises must be handled via `await` or `.catch()`; without
either of these, Node will exit due to the "unhandled rejection".
Currently, Lake S3 requests are created in advance and then handled
later, when the block is ready to be executed. Usually this is not an
issue, as the delay between promise creation and handling is not too
large. But for slow indexers (e.g. `nearpavel_near/bitmap_v2`), this
delay is long enough for Node to consider the rejection "unhandled".

This PR attaches a rejection handler to block requests so that failures
can be handled gracefully. This does not affect the "pre-fetch"
behaviour: the requests are still executed ahead of time, but failed
requests are now handled within our code. Explicit handling binds the
error to our execution context, providing a meaningful call stack, as
opposed to a lonesome error that seemingly comes from nowhere.

To mitigate the underlying problem of failed S3 requests, I have bumped
`maxAttempts` and changed the `retryMode` in the hope that the transient
errors can be overcome.
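
A rough sketch of the pattern, with a hypothetical `fetchFromS3` callback standing in for the actual Lake client code:

```typescript
// Hypothetical sketch: `fetchFromS3` stands in for the real Lake S3 request.
// The fetch starts eagerly (preserving the "pre-fetch" behaviour), a handler
// is attached immediately so Node never sees an unhandled rejection, and the
// failure is re-thrown from our own code when the block is consumed, giving
// a meaningful call stack.
function prefetchBlock(
  blockHeight: number,
  fetchFromS3: (height: number) => Promise<Uint8Array>,
): () => Promise<Uint8Array> {
  // Both outcomes are handled here, so this promise never rejects unhandled.
  const settled = fetchFromS3(blockHeight).then(
    (block) => ({ block, error: undefined as Error | undefined }),
    (error: Error) => ({ block: undefined as Uint8Array | undefined, error }),
  );

  return async () => {
    const { block, error } = await settled;
    if (error !== undefined || block === undefined) {
      // Thrown from our execution context, so the stack points at our code.
      throw new Error(`Failed to fetch block ${blockHeight}: ${error?.message ?? 'unknown error'}`);
    }
    return block;
  };
}
```
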
Tracking which Indexer was forked during development is useful for
various usage statistics. I've updated the frontend to record which
indexer a fork was created from and store that information in the
contract after publishing. We specifically store the indexer which was
directly forked.

With this information we can construct a graph using "forked_from" as
edge information and calculate various other statistics, such as how
many indexers used some other indexer as a base, by doing path counts.
This can be done by creating an indexer which indexes the QueryApi
contract.
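
A hedged sketch of that idea (types and field names are illustrative, not the contract's actual schema):

```typescript
// Hedged sketch: "forked_from" gives one edge per fork (child -> parent);
// walking each chain upward yields per-indexer descendant counts.
interface RegisteredIndexer {
  accountId: string;
  functionName: string;
  forkedFrom?: { accountId: string; functionName: string };
}

function countDescendants(indexers: RegisteredIndexer[]): Map<string, number> {
  const key = (i: { accountId: string; functionName: string }): string =>
    `${i.accountId}/${i.functionName}`;

  // Build the edge map from the stored forked_from references.
  const parentOf = new Map<string, string>();
  for (const indexer of indexers) {
    if (indexer.forkedFrom !== undefined) {
      parentOf.set(key(indexer), key(indexer.forkedFrom));
    }
  }

  // Credit every ancestor on the path from each fork back to its root.
  const counts = new Map<string, number>();
  for (const child of parentOf.keys()) {
    let current = parentOf.get(child);
    while (current !== undefined) {
      counts.set(current, (counts.get(current) ?? 0) + 1);
      current = parentOf.get(current);
    }
  }
  return counts;
}
```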

There are some bugs in the workflow currently. Specifically, refreshing
while on the forked page will cause the loss of whatever code was
written, and you need to re-fork the Indexer. While this helps ensure
the forked-from field is correct, if we end up fixing this behavior we
will need to ensure the forked-from value is stored persistently.


As part of updating the contract, I've removed various old functions
which are no longer necessary.
This PR centralises persistent Indexer State within the
`IndexerStateManager` struct. Currently we only persist the "stream
version", but this will soon grow to include "enabled/disabled", which
will be implemented in my next PR. This is just a tidy-up to make the
next step a bit easier.

Indexer state will be stored as stringified JSON under
`{account_id}/{function_name}:state`; currently this only includes when
the block stream was last synced. I've included a migration step to
move from the old key/structure to the new one.
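
As a rough illustration of the new layout (the field name and values here are assumptions based on the description, not Coordinator's exact Rust struct):

```typescript
// Illustrative only: state is stored as stringified JSON under
// `{account_id}/{function_name}:state`.
interface IndexerState {
  // Assumed field: records the stream version the block stream was last synced to.
  block_stream_synced_at?: number;
}

function stateKey(accountId: string, functionName: string): string {
  return `${accountId}/${functionName}:state`;
}

// e.g. stateKey('morgs.near', 'test') === 'morgs.near/test:state'
const example: IndexerState = { block_stream_synced_at: 119447541 }; // illustrative value
const serialized = JSON.stringify(example); // value written under the key
```
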
This PR exposes a new gRPC endpoint from Coordinator to "manage"
indexers. Currently, this only allows for enabling/disabling, but will
probably be expanded over time. There isn't any intention to use this
from another service; it's more of a manual, internal tool that we can
use.

The endpoint is essentially just a wrapper over the persistent Redis
state. The exposed methods mutate this state, which in turn is used to
govern how Indexers should be synchronised.

Within the `coordinator/` directory, the endpoint can be used with
`grpcurl` like so:
- enable: `grpcurl -plaintext -proto proto/indexer_manager.proto -d '{"account_id": "morgs.near", "function_name": "test"}' 0.0.0.0:8002 indexer.IndexerManager.Enable`
- disable: `grpcurl -plaintext -proto proto/indexer_manager.proto -d '{"account_id": "morgs.near", "function_name": "test"}' 0.0.0.0:8002 indexer.IndexerManager.Disable`
- list: `grpcurl -plaintext -proto proto/indexer_manager.proto 0.0.0.0:8002 indexer.IndexerManager.List`
Hasura permissions were previously created without the backend-only
flag on mutations. We no longer want to allow users to mutate indexer
data directly, as the data should stay consistent with the results of
indexer code execution. Runner now marks all non-select permissions as
backend only.
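
For context, a hedged sketch of what setting a backend-only permission looks like against Hasura's metadata API (the source, table, role, and column names are illustrative, not Runner's actual HasuraClient code):

```typescript
// Illustrative sketch: create an insert permission with `backend_only: true`,
// so the mutation is only usable by trusted backends (admin secret plus the
// x-hasura-use-backend-only-permissions header), not by end users.
async function createBackendOnlyInsertPermission(hasuraEndpoint: string, adminSecret: string): Promise<void> {
  const response = await fetch(`${hasuraEndpoint}/v1/metadata`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Hasura-Admin-Secret': adminSecret,
    },
    body: JSON.stringify({
      type: 'pg_create_insert_permission',
      args: {
        source: 'default',                                        // assumed source name
        table: { schema: 'morgs_near_test', name: 'indexer_storage' }, // illustrative table
        role: 'morgs_near_test',                                  // illustrative role
        permission: {
          check: {},
          columns: ['function_name', 'key_name', 'value'],        // illustrative columns
          backend_only: true,
        },
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`Failed to set backend-only permission: ${await response.text()}`);
  }
}
```
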
In Coordinator, iterating over all Indexers requires two loops: one
over the accounts, and another over each account's functions. As this
iteration is quite common, I've added a custom `Iterator` implementation
which achieves the same with a single loop.
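
The actual implementation is a Rust `Iterator`; the concept, sketched here in TypeScript with assumed types, is simply flattening the nested account → function structure:

```typescript
// Conceptual sketch only: yield every function from every account so callers
// need a single loop rather than two nested ones.
interface IndexerConfig {
  accountId: string;
  functionName: string;
}

type Registry = Map<string, Map<string, IndexerConfig>>; // account -> function -> config

function* iterIndexers(registry: Registry): Generator<IndexerConfig> {
  for (const functions of registry.values()) {
    for (const config of functions.values()) {
      yield config;
    }
  }
}

// Callers now iterate with a single loop:
// for (const indexer of iterIndexers(registry)) { ... }
```
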
The error given to Stream Handler is occasionally a JSON object in
disguise rather than an Error. As a result, calling error.toString()
returns `[object Object]` instead of the error contents. I've added a
check so that if the result of toString() is that value, we return the
JSON.stringify result instead. Calling JSON.stringify on a proper Error
results in `{}`, so the two cases must be handled separately.

To test this, I created test indexers which call one of the two pieces
of code below, each of which throws an unawaited async error. I verified
that both wrote the error message and stack trace into the log table.

```
const timeoutPromise = new Promise((_, reject) => {
  setTimeout(() => {
    reject(new Error('Error thrown after 100ms'));
  }, 100);
});
```
```
context.db.IndexerStorage.upsert({}, [], []);
```
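
A minimal sketch of the described check (not Runner's exact code):

```typescript
// Sketch only: normalise an unknown "error" into a loggable string. Plain
// Errors keep their message and stack; JSON-objects-in-disguise, whose
// toString() is the useless "[object Object]", are stringified instead.
function formatError(error: unknown): string {
  if (error instanceof Error) {
    // JSON.stringify(new Error(...)) would give "{}", so handle Errors first.
    return error.stack ?? error.message;
  }
  const asString = String(error);
  return asString === '[object Object]' ? JSON.stringify(error) : asString;
}
```
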
Block Streamer will be querying block match data from an Indexer's
Postgres tables through Hasura. Thus, Block Streamer needs to be able to
query and parse data returned by Hasura's GraphQL APIs.

This PR introduces a crate which can generate the code necessary to
parse data returned from a GraphQL query. I've also created a struct
which encapsulates the code for making GraphQL calls, for ease of use
when integrating or mocking this feature in the future.
Introduce an improved logs table with new fields and the ability to
search and filter based on radio inputs.
The new website for [near.org](http://near.org/) is now simply a
marketing website served by this repo and will no longer serve as the
gateway: https://github.com/near/nearorg
The gateway now lives at [dev.near.org](http://dev.near.org/) and
remains the same: https://github.com/near/near-discovery

Looked for dead links in the QueryApi repo. A few links that used
alpha.near were intentionally left unmodified. Changed most near.org
links to dev.near.org.
There was a mismatch between the versions of
@typescript-eslint/eslint-plugin and @typescript-eslint/parser required
by the project and the versions required by ESLint. Specifically, ESLint
requires version ^8.56.0 of @typescript-eslint/parser, while the project
requires version ^7.9.0.


https://console.cloud.google.com/cloud-build/builds/1825a649-e414-4a72-8c68-5c50625d128f;step=0?project=pagoda-data-stack-dev
The bitmap indexer returns a list of bitmaps in the form of base64
strings along with their associated start block heights. We need a way
to combine all of that data into a single bitmap with a single start
block height.

This PR introduces a new BitmapOperator class which holds the operations
needed to return a combined binary bitmap whose index 0 corresponds to
the lowest start block height.
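
A hedged sketch of the merging idea (the real BitmapOperator is Rust; the names and the most-significant-bit-first ordering within each byte are assumptions):

```typescript
// Illustrative sketch: decode each base64 bitmap, shift it by the offset of
// its start height from the lowest start height, and OR everything into a
// single bitmap whose bit 0 corresponds to that lowest start height.
interface IndexedBitmap {
  bitmapBase64: string;
  startBlockHeight: number;
}

function mergeBitmaps(bitmaps: IndexedBitmap[]): { startBlockHeight: number; bitmap: Uint8Array } {
  const decoded = bitmaps.map((b) => ({
    bytes: Buffer.from(b.bitmapBase64, 'base64'),
    startBlockHeight: b.startBlockHeight,
  }));
  const startBlockHeight = Math.min(...decoded.map((b) => b.startBlockHeight));

  // Size the merged bitmap so the furthest-shifted input still fits.
  const totalBits = Math.max(
    ...decoded.map((b) => b.startBlockHeight - startBlockHeight + b.bytes.length * 8),
  );
  const merged = new Uint8Array(Math.ceil(totalBits / 8));

  for (const { bytes, startBlockHeight: start } of decoded) {
    const offset = start - startBlockHeight; // bit offset into the merged bitmap
    for (let bit = 0; bit < bytes.length * 8; bit++) {
      if (((bytes[bit >> 3] >> (7 - (bit & 7))) & 1) === 1) {
        const target = bit + offset;
        merged[target >> 3] |= 1 << (7 - (target & 7));
      }
    }
  }

  // Index 0 of `merged` now corresponds to `startBlockHeight`.
  return { startBlockHeight, bitmap: merged };
}
```
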
Disabled some lint rules that were causing the build to fail. Ran the
local build and it succeeds.
@darunrs darunrs requested a review from a team as a code owner June 5, 2024 17:22
Kevin101Zhang and others added 3 commits June 5, 2024 18:13
Improved the indexingLogic components by:
1. Separating them into view/container components
2. Breaking down larger components into more reusable and meaningful
counterparts
3. Adding TypeScript to files
4. Redesigning the IndexingLogic visuals to match the Logs theme (GCP)
5. Removing the unnecessary "Forms" folder structure, as it is tightly
coupled with the Modals
6. Removing dead code and refactoring repeated logic
Runner's Hasura Client test container was exceeding its 120s timeout
while waiting for the "starting API Server" log message. It turns out
the start message now uses a capital S, which was the cause of the
failure. Changing this allowed the integration tests to work again.
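
For reference, a hedged sketch of a wait strategy that would tolerate the capitalisation change (image name, port, and timeout are illustrative; Runner's actual test setup may differ):

```typescript
// Illustrative sketch using the `testcontainers` npm package: a
// case-insensitive regex avoids breaking when the upstream log message
// changes capitalisation, e.g. "starting API Server" vs "Starting API Server".
import { GenericContainer, StartedTestContainer, Wait } from 'testcontainers';

async function startHasuraContainer(): Promise<StartedTestContainer> {
  return await new GenericContainer('hasura/graphql-engine:latest')
    .withExposedPorts(8080)
    .withWaitStrategy(Wait.forLogMessage(/starting api server/i))
    .withStartupTimeout(120_000)
    .start();
}
```
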
…Minor Bugs (#775)

Enhanced the logs table by adding contextual information, such as
timestamps and the number of blocks from NEAR's latest block, for
developers on the Logs page.
@darunrs darunrs merged commit d84c2ae into stable Jun 6, 2024
18 checks passed