I'm seeing 3 worker nodes each slowly downloading 1 large DAG, while messages back up in the queue. I'd expect each node to keep pulling messages from the queue, up to `BATCH_SIZE`, rather than waiting for the single item to complete.
- switch to an SQS lib that polls for new messages concurrently rather than in batches (a rough sketch of that polling model follows the squiss-ts link below). **This is rad** as now we'll make better use of each container!
- treat timeouts as a regular failure. Let the message go back on the queue for another node to try. After 3 goes it'll go to the dead-letter queue and be marked as failed (see the redrive-policy sketch after the config example below). This is fine, and it simplifies the pickup worker a lot, as it no longer needs to talk to dynamo or determine the cause of an error.
- rewrite the pickup worker so we can compose it out of single-responsibility pieces instead of having to pass through the giant config ball. _It's so much simpler now!_ You can figure out what it does from its parts: `sqsPoller` + `carFetcher` + `s3Uploader` (config below, plus a sketch of how they compose after it).
```js
const pickup = createPickup({
  // pull messages from SQS, keeping up to BATCH_SIZE in flight at once
  sqsPoller: createSqsPoller({
    queueUrl: SQS_QUEUE_URL,
    maxInFlight: BATCH_SIZE
  }),
  // fetch the requested CAR from the IPFS node, giving up after the timeout
  carFetcher: new CarFetcher({
    ipfsApiUrl: IPFS_API_URL,
    fetchTimeoutMs: TIMEOUT_FETCH
  }),
  // write the fetched CAR to the validation bucket
  s3Uploader: new S3Uploader({
    bucket: VALIDATION_BUCKET
  })
})
```
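A minimal sketch of how those three pieces might compose. The run loop below is illustrative only, not the real wiring: the message shape (`msg.body`), the `msg.del()` / `msg.release()` methods, and the `carFetcher.fetch()` / `s3Uploader.upload()` interfaces are assumptions here.

```js
// Illustrative only: poll messages, fetch each CAR, upload it, then ack.
function createPickup ({ sqsPoller, carFetcher, s3Uploader }) {
  sqsPoller.on('message', async msg => {
    const { cid, key } = msg.body // assumed message shape
    try {
      const body = await carFetcher.fetch(cid) // throws on timeout
      await s3Uploader.upload({ key, body })
      await msg.del() // done: remove the message from the queue
    } catch (err) {
      // timeouts and other failures alike: put the message back so
      // another node can try it
      await msg.release()
    }
  })
  return {
    start: () => sqsPoller.start(),
    stop: () => sqsPoller.stop()
  }
}
```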
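The "after 3 goes" behaviour comes from the queue's redrive policy rather than from pickup itself. A hedged sketch of that configuration with the AWS SDK v3, assuming a dead-letter queue already exists; `SQS_QUEUE_URL` and `DEAD_LETTER_QUEUE_ARN` are placeholders:

```js
import { SQSClient, SetQueueAttributesCommand } from '@aws-sdk/client-sqs'

const sqs = new SQSClient({})

// After 3 failed receives, SQS moves the message to the dead-letter queue.
await sqs.send(new SetQueueAttributesCommand({
  QueueUrl: SQS_QUEUE_URL,
  Attributes: {
    RedrivePolicy: JSON.stringify({
      deadLetterTargetArn: DEAD_LETTER_QUEUE_ARN, // placeholder ARN
      maxReceiveCount: 3
    })
  }
}))
```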
see: https://github.com/PruvoNet/squiss-ts

fixes #13
fixes #116
fixes #101
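For reference, a rough sketch of the concurrent-polling model squiss-ts provides, based on its README: it keeps pulling messages until `maxInFlight` are being processed, instead of waiting for a whole batch to finish. The queue URL and `handle` function are placeholders.

```js
import { Squiss } from 'squiss-ts'

const poller = new Squiss({
  queueUrl: SQS_QUEUE_URL,
  maxInFlight: BATCH_SIZE, // keep this many messages in flight at once
  bodyFormat: 'json'
})

poller.on('message', async msg => {
  await handle(msg.body) // placeholder handler
  msg.del()
})

poller.start()
```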
License: MIT
---------
Signed-off-by: Oli Evans <oli@protocol.ai>