Failed to fetch the latest block #837
Thanks for opening an issue. Could you try writing a script similar to the one here, but using viem (https://viem.sh)? Ponder uses viem internally to communicate with RPCs. If the request works with viem but not with Ponder, that would be a clue.
Testing with a viem script:

// 1. Import modules.
import { createPublicClient, http } from 'viem'
import { mainnet } from 'viem/chains'
const client = createPublicClient({
  chain: mainnet,
  transport: http('http://127.0.0.1:9944'),
})

async function main() {
  for (let i = 0; i < 100; i++) {
    const block = await client.request({
      method: 'eth_getBlockByNumber',
      params: ['latest', true],
    })
    console.log("fetched block: ", block.number)
    await sleep(500);
  }
}

function sleep(ms) {
  return new Promise((resolve) => {
    setTimeout(resolve, ms);
  });
}
await main();

The output:

fetched block: 0x271777
fetched block: 0x271779
fetched block: 0x271779
fetched block: 0x27177a
fetched block: 0x27177a

It also works with viem. Or should I tweak the script to test it again?
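(One possible tweak, sketched below rather than taken from the thread: time each eth_getBlockByNumber call to see whether any single request takes unusually long, say more than ten seconds. It reuses the client and sleep helper already defined in the script above.)

// Sketch: same polling loop as above, but measuring per-request latency.
// Reuses `client` and `sleep` from the script above.
async function mainTimed() {
  for (let i = 0; i < 100; i++) {
    const started = Date.now()
    const block = await client.request({
      method: 'eth_getBlockByNumber',
      params: ['latest', true],
    })
    const elapsed = Date.now() - started
    console.log(`fetched block ${block.number} in ${elapsed} ms`)
    await sleep(500)
  }
}

await mainTimed()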
Thanks. Could you share the "network" configuration you're using in your ponder.config.ts?
We are also seeing this issue in 0.4.9, and it seems to be preventing the Ponder indexer from becoming healthy.
const MAX_REQUESTS_PER_SECOND = 6;

export default createConfig({
  networks: {
    darwinia: {
      chainId: 46,
      // transport: http("http://c1.darwinia-rpc.itering.io:9944/"),
      transport: http("http://127.0.0.1:9944"),
      maxRequestsPerSecond: MAX_REQUESTS_PER_SECOND,
    },
  },

To make things clear, I have omitted other networks.
@boundless-forest Thanks for the config. The server you provided in the config appears to be offline, though. Based on your initial error message, it looks like your server took too long to respond and our transport killed the request, causing the error you reported.
Sorry for the inconvenience. The service
Exactly! I also guess this is the reason. But I'm confused: why does Ponder trigger this error when calling the same RPC method, while the test script above does not? Is there any limit or config in Ponder that monitors the request time, such that a request is killed if it doesn't get a server response within xxx seconds? If there is, I can tweak the value and try again.
@boundless-forest Yes. It looks like you're using viem's http transport, which has a configurable request timeout:

type HttpTransportConfig = {
  /**
   * Whether to enable Batch JSON-RPC.
   * @link https://www.jsonrpc.org/specification#batch
   */
  batch?: boolean | BatchOptions
  /**
   * Request configuration to pass to `fetch`.
   * @link https://developer.mozilla.org/en-US/docs/Web/API/fetch
   */
  fetchOptions?: HttpOptions['fetchOptions']
  /** The key of the HTTP transport. */
  key?: TransportConfig['key']
  /** The name of the HTTP transport. */
  name?: TransportConfig['name']
  /** The max number of times to retry. */
  retryCount?: TransportConfig['retryCount']
  /** The base delay (in ms) between retries. */
  retryDelay?: TransportConfig['retryDelay']
  /** The timeout (in ms) for the HTTP request. Default: 10_000 */
  timeout?: TransportConfig['timeout']
}

So you can try changing your network config:

const TIMEOUT_MS = 60_000;
export default createConfig({
  networks: {
    darwinia: {
      chainId: 46,
      // transport: http("http://c1.darwinia-rpc.itering.io:9944/"),
      transport: http("http://127.0.0.1:9944", { timeout: TIMEOUT_MS }),
      maxRequestsPerSecond: MAX_REQUESTS_PER_SECOND,
    },
  },
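(Pieced together as a single file, the suggested fix might look like the sketch below. The @ponder/core import path assumes a Ponder 0.4.x project, and the contracts section stands in for the parts omitted in the thread.)

// ponder.config.ts — a sketch of the suggested fix, not copied verbatim from the thread.
import { createConfig } from "@ponder/core"; // assumed import path for Ponder 0.4.x
import { http } from "viem";

const MAX_REQUESTS_PER_SECOND = 6;
const TIMEOUT_MS = 60_000; // raise viem's 10_000 ms default for the slow local node

export default createConfig({
  networks: {
    darwinia: {
      chainId: 46,
      transport: http("http://127.0.0.1:9944", { timeout: TIMEOUT_MS }),
      maxRequestsPerSecond: MAX_REQUESTS_PER_SECOND,
    },
  },
  contracts: {
    // ... contract definitions omitted in the thread
  },
});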
I am using Ponder to index EVM logs on our Substrate-based, Ethereum-compatible chain. However, I have encountered an issue where it fails to fetch the latest block.
The Ponder logs:
My first thought was that the node RPC was down and not responding to the eth_getBlockByNumber request. So I wrote a script to test it by calling the RPC method directly, and the result shows that the RPC method is alive. The only log worth noting on the RPC server side is:
I have no idea what caused it. I would appreciate it if anyone has any insight into this.
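(For reference, a direct JSON-RPC check along the lines described above could look like the sketch below; it is not the author's actual test script, and it assumes the local endpoint from the thread plus a fetch-capable runtime such as Node 18+.)

// Sketch: call eth_getBlockByNumber directly over JSON-RPC, without a client library.
const response = await fetch("http://127.0.0.1:9944", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "eth_getBlockByNumber",
    params: ["latest", true],
  }),
});
const { result } = await response.json();
console.log("fetched block:", result?.number);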