
Refactors the program parser trait and cleans up some functions #69

Merged: 10 commits merged into main on May 26, 2022

Conversation

@austbot (Collaborator) commented on May 23, 2022:

I need to put Gummy Roll CRUD back in place, but I want to get this in front of the team before I spin wheels there.

Things I did not do that still need to be done (I will do all of these in separate PRs):
Generalize Storage -> new PR
Factor specific program logic out of the batch mint functions -> new PR
Add a logger -> new PR
Refactor schemas and SQL -> new PR

@ngundotra (Collaborator) left a comment:

The ProgramHandler trait and its related abstractions look great. Looks like it will simplify things going forward.

Not 100% sure what the account updates will be used for in this context. I'm guessing they're for supporting generalized indexers?

Left some comments saying it's fine to drop / delete / move the batch_insert/write code.

Review threads on: nft_ingester/src/main.rs, nft_ingester/src/parsers/gummy_roll.rs, nft_ingester/src/parsers/mod.rs, nft_ingester/src/utils/storage.rs, plerkle_serialization/src/lib.rs
@austbot (Collaborator, Author) commented on May 23, 2022:

> The ProgramHandler trait and its related abstractions look great. Looks like it will simplify things going forward.
>
> Not 100% sure what the account updates will be used for in this context. I'm guessing they're for supporting generalized indexers?
>
> Left some comments saying it's fine to drop / delete / move the batch_insert/write code.

Since this is the foundation of the indexer for all NFTs using the Digital Asset RPC API, we need the account updates to track the state of NFTs.
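
For readers following along, here is a minimal sketch of the kind of per-program handler trait being discussed. The payload types, method names, and error type below are illustrative assumptions, not the actual code from this PR:

use async_trait::async_trait;

type HandlerError = Box<dyn std::error::Error + Send + Sync>;

// Hypothetical update payloads; the real PR carries its own serialized types.
pub struct AccountUpdate { pub pubkey: [u8; 32], pub data: Vec<u8> }
pub struct InstructionUpdate { pub program_id: [u8; 32], pub data: Vec<u8> }

// Sketch of a per-program handler: each program the ingester indexes gets one
// implementation, and a manager dispatches updates to it by program id.
#[async_trait]
pub trait ProgramHandler: Send + Sync {
    // Program id this handler is registered for.
    fn key(&self) -> [u8; 32];

    // Account updates, needed to track NFT state for the Digital Asset RPC API.
    async fn handle_account(&self, update: &AccountUpdate) -> Result<(), HandlerError>;

    // Instruction updates (e.g. Gummy Roll / Bubblegum instructions).
    async fn handle_instruction(&self, update: &InstructionUpdate) -> Result<(), HandlerError>;
}

A manager such as ProgramHandlerManager could then hold a map from program id to Box<dyn ProgramHandler> and route each account or instruction update to the matching handler.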

@ngundotra (Collaborator) left a comment:

looks good

Review threads on: nft_ingester/src/main.rs, nft_ingester/src/utils/instructions.rs, nft_ingester/src/utils/logs.rs, nft_ingester/src/parsers/bubblegum.rs, nft_ingester/src/parsers/gummy_roll.rs
Comment on lines 19 to 23:

let pool = PgPoolOptions::new()
    .max_connections(5)
    .connect("postgres://solana:solana@db/solana")
    .await
    .unwrap();
@danenbm (Collaborator) commented on May 24, 2022:

So each time a manager is set up, a new Pool with 5 connections is created? I thought the idea of the sqlx Pool struct was to create it once and then just send copies of the handle around to different tasks.

From the sqlx core docs:

/// `Pool` is `Send`, `Sync` and `Clone`. It is intended to be created once at the start of your
/// application/daemon/web server/etc. and then shared with all tasks throughout the process'
/// lifetime. How best to accomplish this depends on your program architecture.

Seems like you could still just pass in a copy to service_transaction_stream:

tasks.push(service_transaction_stream(pool.clone()).await);

and then you could just pass it into the constructor and still do the other stuff in setup_manager:

let mut manager = ProgramHandlerManager::new(pool);
manager = setup_manager(manager).await;

Seems simpler than having to use Arc for this use case, but I'm not adamant about this; I'm just curious about the need for Arc for this particular object.
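
As a rough illustration of this suggestion, here is a sketch with placeholder struct and function bodies standing in for the real ingester code; only the pool-passing pattern is the point:

use sqlx::{Pool, Postgres};

struct ProgramHandlerManager {
    pool: Pool<Postgres>,
}

impl ProgramHandlerManager {
    fn new(pool: Pool<Postgres>) -> Self {
        Self { pool }
    }
}

// Placeholder for whatever registration setup_manager actually performs.
async fn setup_manager(manager: ProgramHandlerManager) -> ProgramHandlerManager {
    manager
}

async fn service_transaction_stream(pool: Pool<Postgres>) {
    // Pool is already a cheap, clonable handle, so the task can own a copy
    // directly instead of wrapping it in Arc.
    let manager = setup_manager(ProgramHandlerManager::new(pool)).await;
    // ... consume the transaction stream using `manager` ...
    let _ = manager;
}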

@austbot (Collaborator, Author) commented:

Teach me what is best. I put the pool in an Arc and was considering using an Arc<Mutex> to share it with all tasks.

@danenbm (Collaborator) commented on May 25, 2022:

According to the docs and various examples, in the simplest terms we should be able to do something like:

use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() {
    // Create the pool once at startup.
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://solana:solana@db/solana")
        .await
        .unwrap();

    let mut tasks = vec![];

    for i in 0..10 {
        // Cloning copies the handle only; the connections are shared.
        let pool = pool.clone();

        tasks.push(tokio::spawn(async move {
            service_something(i, &pool).await;
        }));
    }

    // Wait for every task to finish.
    for task in tasks {
        task.await.unwrap();
    }
}

// Placeholder standing in for the real per-task service function.
async fn service_something(i: i32, _pool: &sqlx::PgPool) {
    println!("task {i} running");
}

So each task just gets a copy of the connection pool. It doesn't seem like we need an Arc or a Mutex. When I read the docs, it seems like just using the Pool object as-is is meant to solve those issues of reference counting or having to use a mutex to protect a connection:

It is intended to be created once at the start of your application/daemon/web server/etc. and then shared with all tasks throughout the process’ lifetime.

Cloning Pool is cheap as it is simply a reference-counted handle to the inner pool state. When the last remaining handle to the pool is dropped, the connections owned by the pool are immediately closed (also by dropping).

Connection pools facilitate reuse of connections to amortize these costs

Caveats:

  1. There used to be a bug with some of this behavior, but it looks like it was fixed (although there was a potential repro 5 days ago; not sure if that usage is correct, it looks slightly different).
  2. You might be seeing the future state more than me, with registering more parsers and separating account handlers from transaction handlers, etc., so maybe I'm not seeing the architectural reason to have a new pool per ProgramHandlerManager.

@austbot (Collaborator, Author) commented:

Okay, so we clone. I thought clone would make 5 x n connections.

@austbot (Collaborator, Author) commented:

I thought it duplicated all the connections.
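
A minimal sketch of the behavior in question, assuming sqlx's postgres and tokio-runtime features: cloning a Pool copies only the reference-counted handle, so the clone shares the same (at most 5) connections rather than opening new ones.

use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://solana:solana@db/solana")
        .await?;

    // Both handles point at the same inner pool state; cloning does not
    // open any new connections.
    let handle = pool.clone();
    assert_eq!(pool.size(), handle.size());
    Ok(())
}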

A collaborator commented:

Looks like this was resolved in the most recent change.

Review threads on: nft_ingester/src/main.rs
@ngundotra (Collaborator) left a comment:

Looks good to me. Great work!

COPY ./nft_ingester .
RUN ls -la
RUN cargo build
RUN cp -r /rust/nft_ingester/target/$MODE /rust/bin
A collaborator commented:

What does $MODE refer to?

Comment on lines 19 to 23:

let pool = PgPoolOptions::new()
    .max_connections(5)
    .connect("postgres://solana:solana@db/solana")
    .await
    .unwrap();
A collaborator commented:

Looks like this was resolved in the most recent change.

@austbot merged commit a2860d2 into main on May 26, 2022.