client: Snap Sync Scenario Improvements #3200

Merged: 18 commits, Feb 20, 2024
18 changes: 18 additions & 0 deletions packages/client/src/service/fullethereumservice.ts
@@ -191,6 +191,24 @@
// open snapsync instead of execution if instantiated
// it will open execution when done (or if it doesn't need to snap sync)
if (this.snapsync !== undefined) {
// set up execution vm to avoid undefined error in syncWithPeer when vm is being passed to accountfetcher
if (this.execution.config.execCommon.gteHardfork(Hardfork.Prague)) {
if (!this.execution.config.statelessVerkle) {
throw Error(`Currently stateful verkle execution not supported`)
}
this.execution.config.logger.info(
`Skipping VM verkle statemanager genesis hardfork=${this.execution.hardfork}`
)
await this.execution.setupVerkleVM()
this.execution.vm = this.execution.verkleVM!
} else {

Codecov warning (codecov/patch): added lines #L194 - L204 were not covered by tests.
this.execution.config.logger.info(
`Initializing VM merkle statemanager genesis hardfork=${this.execution.hardfork}`
)

Codecov warning (codecov/patch): added line #L207 was not covered by tests.
await this.execution.setupMerkleVM()
this.execution.vm = this.execution.merkleVM!
}

await this.snapsync.open()
} else {
await this.execution.open()
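The hunk above picks the statemanager flavor before snap sync hands the VM to the account fetcher. A minimal standalone sketch of that selection logic, assuming an execution object exposing the same config flags and setup helpers as the client's VMExecution (function name and import path here are illustrative, not part of the PR):

```ts
import { Hardfork } from '@ethereumjs/common'
import type { VMExecution } from '../execution' // illustrative path

// Pick verkle vs. merkle VM based on the configured hardfork, mirroring the hunk above.
async function setupSnapSyncVM(execution: VMExecution): Promise<void> {
  if (execution.config.execCommon.gteHardfork(Hardfork.Prague)) {
    // Prague and later: only stateless verkle execution is supported here
    if (!execution.config.statelessVerkle) {
      throw Error('stateful verkle execution not supported')
    }
    await execution.setupVerkleVM()
    execution.vm = execution.verkleVM!
  } else {
    // pre-Prague: use the merkle statemanager
    await execution.setupMerkleVM()
    execution.vm = execution.merkleVM!
  }
}
```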
1 change: 1 addition & 0 deletions packages/client/src/sync/fetcher/accountfetcher.ts
@@ -377,6 +377,7 @@ export class AccountFetcher extends Fetcher<JobTask, AccountData[], AccountData>

if (this.highestKnownHash && compareBytes(limit, this.highestKnownHash) < 0) {
// skip this job and don't rerequest it if its limit is lower than the highest known key hash
this.debug(`skipping request with limit lower than highest known hash`)
return Object.assign([], [{ skipped: true }], { completed: true })
}

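A minimal sketch of the skip decision the new debug line accompanies, assuming limit and the highest known hash are 32-byte key hashes (helper name is illustrative):

```ts
import { compareBytes, hexToBytes } from '@ethereumjs/util'

// A job is redundant when its limit falls below the highest key hash already fetched:
// every key in [origin, limit] would then already be known.
function canSkip(limit: Uint8Array, highestKnownHash: Uint8Array | undefined): boolean {
  return highestKnownHash !== undefined && compareBytes(limit, highestKnownHash) < 0
}

// Example: a limit of 0x00..0f lies below a highest known hash of 0xff..ff, so the job is skipped.
const limit = hexToBytes('0x000000000000000000000000000000000000000000000000000000000000000f')
const highest = hexToBytes('0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff')
console.log(canSkip(limit, highest)) // true
```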
78 changes: 55 additions & 23 deletions packages/client/src/sync/fetcher/storagefetcher.ts
@@ -65,6 +65,7 @@

export type JobTask = {
storageRequests: StorageRequest[]
multi: boolean
}

export class StorageFetcher extends Fetcher<JobTask, StorageData[][], StorageData[]> {
@@ -151,26 +152,35 @@
* @returns origin of job is set using either @first property of fetcher or latest hash of partial job
*/
private getOrigin(job: Job<JobTask, StorageData[][], StorageData[]>): Uint8Array {
const { task, partialResult } = job
if (task.storageRequests.length > 1 || task.storageRequests[0].first === BIGINT_0) {
// peer does not respect origin or limit for multi-account storage fetch
return setLengthLeft(bigIntToBytes(BIGINT_0), 32)
}
const { first } = task.storageRequests[0]!
let origin = undefined
if (partialResult) {
const lastSlotArray = partialResult[partialResult.length - 1]
const lastSlot = lastSlotArray[lastSlotArray.length - 1]
origin = bigIntToBytes(bytesToBigInt(lastSlot.hash) + BIGINT_1)
} else {
origin = bigIntToBytes(first + BIGINT_1)
let origin: Uint8Array

// this try block contains code that is currently susceptible to typing issues and resulting errors
try {
Review comment (Contributor):

Could you add a comment to the code noting which particular error scenario motivated you to put this in a try/catch?

Reply from @scorbajio (Contributor, Author), Feb 16, 2024:

The parsing of partialResult was resulting in indexing and undefined errors, since lastSlot was an array in certain instances instead of a slot object. I may need to rethink some of the typing we are doing in the fetchers; that's why it was necessary to use ts-ignore. I will add a comment about it to the code.
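A sketch of the defensive shape-handling described in the reply above, assuming a slot shape of { hash, body } and tolerating a trailing element that is itself an array of slots (the helpers are hypothetical, not part of the PR):

```ts
import { bigIntToBytes, bytesToBigInt, setLengthLeft } from '@ethereumjs/util'

interface SlotLike {
  hash: Uint8Array
  body: Uint8Array
}

// Extract the hash of the last fetched slot from a partial result, whether the
// trailing element is a single slot object or a nested slot array.
function lastSlotHash(partialResult: (SlotLike | SlotLike[])[][]): Uint8Array | undefined {
  const lastArray = partialResult[partialResult.length - 1]
  if (lastArray === undefined || lastArray.length === 0) return undefined
  const tail = lastArray[lastArray.length - 1]
  const slot = Array.isArray(tail) ? tail[tail.length - 1] : tail
  return slot?.hash
}

// The next origin is then that hash + 1, left-padded to 32 bytes, as in the guarded code below.
function nextOrigin(hash: Uint8Array): Uint8Array {
  return setLengthLeft(bigIntToBytes(bytesToBigInt(hash) + BigInt(1)), 32)
}
```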

const { task, partialResult } = job

if (task.multi === true) {
// peer does not respect origin or limit for multi-account storage fetch
return setLengthLeft(bigIntToBytes(BIGINT_0), 32)
Review comment from @g11tech (Contributor), Feb 15, 2024:

Even if the peer doesn't respect the origin or limit, does providing one cause an issue? (Just curious.)

Reply from @scorbajio (Contributor, Author), Feb 16, 2024:

It seems like origin/limit is still used while servicing the request, but it's a bit open ended (see https://github.com/ethereum/go-ethereum/blob/a193bb0c730e413db56424a084cc172892c68dd5/eth/protocols/snap/handler.go#L346). I just use the max range if it's a multi-account request to be safe.
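A small sketch of the "max range" bounds used for multi-account requests, assuming TOTAL_RANGE_END is the 2^256 - 1 constant the fetcher uses for the upper end of the key-hash space:

```ts
import { bigIntToBytes, bytesToHex, setLengthLeft } from '@ethereumjs/util'

// Full key-hash range: origin 0x00...00, limit 0xff...ff, both padded to 32 bytes.
const TOTAL_RANGE_END = BigInt(2) ** BigInt(256) - BigInt(1)

const origin = setLengthLeft(bigIntToBytes(BigInt(0)), 32)
const limit = setLengthLeft(bigIntToBytes(TOTAL_RANGE_END), 32)

console.log(bytesToHex(origin)) // 0x00...00 (32 zero bytes)
console.log(bytesToHex(limit)) // 0xff...ff (32 0xff bytes)
```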

}
const { first } = task.storageRequests[0]!
if (partialResult) {
const lastSlotArray = partialResult[partialResult.length - 1]

Codecov warning (codecov/patch): added lines #L163 - L167 were not covered by tests.
const lastSlot = lastSlotArray[lastSlotArray.length - 1]
// @ts-ignore
origin = bigIntToBytes(bytesToBigInt(lastSlot[lastSlot.length - 1].hash) + BIGINT_1)
} else {
origin = bigIntToBytes(first + BIGINT_1)
}
return setLengthLeft(origin, 32)
} catch (e: any) {
this.debug(e)
}
return setLengthLeft(origin, 32)
return new Uint8Array(0)
}

private getLimit(job: Job<JobTask, StorageData[][], StorageData[]>): Uint8Array {
const { task } = job
if (task.storageRequests.length > 1) {
if (task.multi === true) {
// peer does not respect origin or limit for multi-account storage fetch
return setLengthLeft(bigIntToBytes(TOTAL_RANGE_END), 32)
}
@@ -218,16 +228,18 @@
this.debug(
`requested account hashes: ${task.storageRequests.map((req) => bytesToHex(req.accountHash))}`
)
this.debug(`request is multi: ${job.task.multi}`)

// only single account requests need their highest known hash tracked since multiaccount requests
// are guaranteed to not have any known hashes until they have been filled and switched over to a
// fragmented request
if (task.storageRequests.length === 1) {
if (task.multi === false) {
const highestKnownHash = this.accountToHighestKnownHash.get(
bytesToHex(task.storageRequests[0].accountHash)
)
if (highestKnownHash && compareBytes(limit, highestKnownHash) < 0) {
// skip this job and don't rerequest it if its limit is lower than the highest known key hash
this.debug(`skipping request with limit lower than highest known hash`)
return Object.assign([], [{ skipped: true }], { completed: true })
}
}
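A sketch of the per-account bookkeeping this branch relies on, with simplified types (map and helper names are illustrative):

```ts
import { bytesToHex, compareBytes } from '@ethereumjs/util'

// Highest storage key hash already received, keyed by account hash. Only consulted
// for single-account (multi === false) jobs, since multi-account jobs have no
// per-account progress to compare against yet.
const accountToHighestKnownHash = new Map<string, Uint8Array>()

function isRedundantRequest(accountHash: Uint8Array, limit: Uint8Array): boolean {
  const highest = accountToHighestKnownHash.get(bytesToHex(accountHash))
  return highest !== undefined && compareBytes(limit, highest) < 0
}

function recordHighestReceived(accountHash: Uint8Array, highestReceived: Uint8Array): void {
  accountToHighestKnownHash.set(bytesToHex(accountHash), highestReceived)
}
```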
@@ -243,7 +255,11 @@
// Reject the response if the hash sets and slot sets don't match
if (rangeResult === undefined || task.storageRequests.length < rangeResult.slots.length) {
this.debug(
`Slot set is larger than hash set: slotset ${rangeResult?.slots.length} hashset ${task.storageRequests.length} `
`Slot set is larger than hash set: slotset ${
rangeResult?.slots !== undefined ? rangeResult.slots.length : 0

Codecov warning (codecov/patch): added lines #L258 - L259 were not covered by tests.
} hashset ${task.storageRequests.length} proofset ${
rangeResult?.proof !== undefined ? rangeResult.proof.length : 0
} `
)
return undefined
}
@@ -320,7 +336,7 @@

// single account requests should check if task range is satisfied since origin and limit
// are being respected
if (task.storageRequests.length === 1 && !(task.storageRequests[0].first === BIGINT_0)) {
if (task.multi === false) {
if (!hasRightElement) {
// all data has been fetched for account storage trie
completed = true
@@ -383,7 +399,7 @@
const highestReceivedhash = accountSlots[accountSlots.length - 1].hash
let updateHighestReceivedHash = false
const request = job.task.storageRequests[0]
if (request.first > BIGINT_0) {
if (job.task.multi === false) {
updateHighestReceivedHash = true
}

@@ -400,7 +416,12 @@
this.accountToHighestKnownHash.delete(bytesToHex(request.accountHash))
}

return Object.assign([], fullResult, { requests: job.task.storageRequests })
return Object.assign(
[],
fullResult,
{ requests: job.task.storageRequests },
{ multi: job.task.multi }
)
} else {
if (updateHighestReceivedHash && highestReceivedhash !== undefined) {
this.accountToHighestKnownHash.set(bytesToHex(request.accountHash), highestReceivedhash)
@@ -415,7 +436,9 @@
* Store fetch result. Resolves once store operation is complete.
* @param result fetch result
*/
async store(result: StorageData[][] & { requests: StorageRequest[] }): Promise<void> {
async store(
result: StorageData[][] & { requests: StorageRequest[] } & { multi: boolean }
): Promise<void> {
try {
if (JSON.stringify(result[0]) === JSON.stringify({ skipped: true })) {
// return without storing to skip this task
@@ -430,7 +453,12 @@
let slotCount = 0
const storagePromises: Promise<unknown>[] = []
result[0].map((slotArray, i) => {
const accountHash = result.requests[i].accountHash
let accountHash
if (result.multi) {
accountHash = result.requests[i].accountHash
} else {
accountHash = result.requests[0].accountHash
}

Codecov warning (codecov/patch): added lines #L456 - L461 were not covered by tests.
const storageTrie = this.stateManager['_getStorageTrie'](accountHash)
for (const slot of slotArray as any) {
slotCount++
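A compact sketch of how slot arrays are matched to account hashes depending on the multi flag (hypothetical helper; request shape simplified):

```ts
interface StorageRequestLike {
  accountHash: Uint8Array
}

// Multi-account results line up slot arrays with requests by index; a fragmented
// single-account result always belongs to the one request at index 0.
function accountHashFor(
  requests: StorageRequestLike[],
  multi: boolean,
  slotArrayIndex: number
): Uint8Array {
  return multi ? requests[slotArrayIndex].accountHash : requests[0].accountHash
}
```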
@@ -494,6 +522,7 @@
)
tasks.unshift({
storageRequests: this.storageRequests, // TODO limit max number of accounts per single fetch request
multi: true,

Codecov warning (codecov/patch): added line #L525 was not covered by tests.
})
this.storageRequests = [] // greedily request as many account slots as possible by requesting all known ones
return tasks
@@ -522,6 +551,7 @@
],
first: myFirst,
count: max,
multi: false,
}
tasks.push(task)
myFirst += BigInt(max)
@@ -540,6 +570,7 @@
],
first: myFirst,
count: myCount,
multi: false,

Codecov warning (codecov/patch): added line #L573 was not covered by tests.
}
tasks.push(task)
myFirst += BigInt(myCount)
@@ -587,12 +618,13 @@
let fullJob = undefined
if (this.storageRequests.length > 0) {
fullJob = {
task: { storageRequests: this.storageRequests },
task: { storageRequests: this.storageRequests, multi: true },
} as Job<JobTask, StorageData[][], StorageData[]>
} else if (this.fragmentedRequests.length > 0) {
fullJob = {
task: {
storageRequests: [this.fragmentedRequests[0]],
multi: false,
},
} as Job<JobTask, StorageData[][], StorageData[]>
} else {
5 changes: 3 additions & 2 deletions packages/client/test/sync/fetcher/storagefetcher.spec.ts
@@ -159,6 +159,7 @@ describe('[StorageFetcher]', async () => {
count: BigInt(2) ** BigInt(256) - BigInt(1),
},
],
multi: false,
}
;(fetcher as any).running = true
fetcher.enqueueTask(task)
@@ -280,11 +281,11 @@
accounts: [
hexToBytes('0x00009e5969eba9656d7e4dad5b0596241deb87c29bbab71c23b602c2b88a7276'),
],
origin: hexToBytes('0x0000000000000000000000000000000000000000000000000000000000000000'),
origin: hexToBytes('0x0000000000000000000000000000000000000000000000000000000000000001'),
limit: hexToBytes('0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff'),
bytes: BigInt(50000),
}
if (JSON.stringify(input) !== JSON.stringify(expected)) throw Error('input not as expected')
assert.deepEqual(input, expected, 'Input as expected')
})
const peer = {
snap: { getStorageRanges: mockedGetStorageRanges },
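The updated expectation follows from getOrigin now returning first + 1 for single-account tasks, which yields 0x00...01 when first is 0. A quick check with the same util helpers (a sketch, not part of the test file):

```ts
import { bigIntToBytes, bytesToHex, setLengthLeft } from '@ethereumjs/util'

// origin = first + 1, left-padded to 32 bytes
const first = BigInt(0)
const origin = setLengthLeft(bigIntToBytes(first + BigInt(1)), 32)
console.log(bytesToHex(origin))
// 0x0000000000000000000000000000000000000000000000000000000000000001
```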