Use batch processor for balances #129
base: feature/firesquid
Conversation
prawns/balances/src/processor.ts
Outdated
```ts
await ctx.store.save(mergeAccounts(models.accounts, existingAccounts));

await Promise.all([
  ctx.store.save(models.transfers),
```
I think save takes an array... so I think you can do a single save with all models
that is an array, but I was a bit nervous about sending an array of different model types! I'll give it a go
It definitely worked pre-firesquid as I did it in governance somewhere.
Updated. I've left accounts above, as it's essential they go in first.
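To make the write-ordering discussed above concrete, here is a minimal sketch. It assumes a TypeORM-like store whose `save()` accepts an array of entities; the store class, `persist` helper, and entity shapes are stand-ins for illustration, not the PR's actual code.

```ts
// Sketch only: FakeStore mimics a store whose save() takes an entity array.
interface Entity {
  id: string;
}

class FakeStore {
  saved: Entity[][] = [];
  async save(entities: Entity[]): Promise<void> {
    this.saved.push(entities);
  }
}

async function persist(
  store: FakeStore,
  accounts: Entity[],
  transfers: Entity[],
  balanceChanges: Entity[]
): Promise<void> {
  // Accounts must be written first: the other models reference them.
  await store.save(accounts);
  // The remaining model types don't depend on each other,
  // so their batched saves can run concurrently.
  await Promise.all([store.save(transfers), store.save(balanceChanges)]);
}
```

The point is that "one save per model type" and "accounts strictly first" can coexist: only the account write is awaited on its own, and everything downstream is batched in parallel.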
```ts
): SubstrateBalanceAccount {
  // existing in this batch only... db check & merge done at the end
  const existingAccountIndex = models.findIndex((acc) => acc.id === id);
```
Bit picky... but I feel this would be easier to follow if the logic for creating a model and the logic for updating it were separated. Something like:
```ts
let model = models[existingAccountIndex];

if (!model) {
  model = new SubstrateBalanceAccount({
    id,
    publicKey,
    network,
    symbol,
    decimals,
    firstBalanceChangeEventDate: date,
    firstBalanceChangeEventBlockNumber: blockNumber,
    lastBalanceChangeEventDate: date,
    lastBalanceChangeEventBlockNumber: blockNumber,
    totalTransfers: 0,
    totalBalanceChangeEvents: 0,
  });
  models.push(model);
}

model.lastBalanceChangeEventDate = date;
model.lastBalanceChangeEventBlockNumber = blockNumber;
model.totalBalanceChangeEvents += 1;

if (isTransfer) {
  model.totalTransfers += 1;
}
```
That's easier to read, updated
LGTM 👏
```ts
async function processEvent(params: EventProcessorParams) {
  switch (params.item.name) {
    case 'Balances.Transfer': {
      const { from, to, amount } = getBalancesTransferEvent(
        params.ctx,
        params.item.event,
        network
      );
      handleBalanceChange({
        eventType: SubstrateBalanceChangeEventType.BalancesTransfer,
        processorParams: params,
        eventParams: {
          amount,
          account: to,
          from,
        },
      });
      break;
    }
```
I just wish there was a better solution for this switch statement :D But don't have a better one
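One common alternative is a lookup table of handlers keyed by event name. The sketch below uses simplified stand-in types (the handler signature, return values, and event names other than `Balances.Transfer` are illustrative, not the actual Subsquid APIs):

```ts
// Hypothetical dispatch table as an alternative to the switch statement.
type EventHandler = (data: unknown) => string;

const handlers: Record<string, EventHandler> = {
  'Balances.Transfer': (data) => `transfer:${JSON.stringify(data)}`,
  'Balances.Deposit': (data) => `deposit:${JSON.stringify(data)}`,
};

function processEvent(name: string, data: unknown): string {
  const handler = handlers[name];
  if (!handler) return 'skipped'; // unknown events are ignored
  return handler(data);
}
```

The trade-off: in the real processor each `case` narrows to a typed decoder (`getBalancesTransferEvent` etc.), and a plain string-keyed map loses that per-case typing, which is probably why the switch is hard to eliminate entirely.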
The batch processor looks good:
Without it (still on fire squid though): 28.7 mins (1723.42 secs)
With it: 1.31 mins (78.79 secs)
Data processed:
Network: polkadot
Squid: balances
Start block: 10,000,000
End block: 10,100,100
According to this, we can index all of balances for a network in a couple of hours. I think the bottleneck is DB write speed, but it's plenty fast enough.
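A quick back-of-envelope check of the numbers quoted above (the block range and timings are from this PR; nothing else is):

```ts
// Sanity-check of the benchmark figures reported above.
const blocks = 10_100_100 - 10_000_000; // 100,100 blocks in the test range
const withoutBatchSecs = 1723.42;
const withBatchSecs = 78.79;

const speedup = withoutBatchSecs / withBatchSecs; // roughly 22x
const blocksPerSec = blocks / withBatchSecs; // roughly 1,270 blocks/s
```

At ~1,270 blocks/s, a chain history in the low tens of millions of blocks works out to a few hours of indexing, consistent with the estimate above.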