
Read from a Farcaster Hub instead of the Warpcast API #15

Draft: gskril wants to merge 146 commits into main

Conversation

gskril (Owner) commented on Mar 11, 2023

Resources:

gskril marked this pull request as draft on March 11, 2023 at 18:10
gskril changed the title from "Read from Farcaster Hub instead of Warpcast API" to "Read from a Farcaster Hub instead of the Warpcast API" on Mar 11, 2023
gskril and others added 30 commits May 3, 2024 23:16
* signers

* Add storage and registrations

* Refactor code to use MAX_PAGE_SIZE constant

* Refactor getOnChainEventsByFidInBatchesOf function

* Update page size constant to use MAX_PAGE_SIZE

* Minor cleanup

* Remove unused import

* Patches

* Patch date logic

* Fix fid registration time, delete unused table

---------

Co-authored-by: Greg Skriloff <35093316+gskril@users.noreply.github.com>
Co-authored-by: Greg <35093316+gskril@users.noreply.github.com>
Redis consumes a ton of memory with this method, but the alternative of storing only event ids is too slow
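A minimal sketch of the trade-off described above, assuming an ioredis client and JSON-serialized hub events; the key names and shapes are illustrative, not the project's actual code:

```ts
import { Redis } from 'ioredis'

const redis = new Redis() // assumes a local Redis instance

// Option A: cache the full event payload. Fast to read back, but every event
// body sits in Redis, which is where the memory blow-up comes from.
async function cacheFullEvent(event: { id: number }) {
  await redis.set(`event:${event.id}`, JSON.stringify(event))
}

// Option B: store only the event id in a set. Tiny memory footprint, but a
// consumer has to go back to the hub (or Postgres) for the payload, which is
// the slowness the note above refers to.
async function cacheEventId(eventId: number) {
  await redis.sadd('event-ids', eventId)
}
```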
Batches caused Redis memory issues. This change will put more stress on Postgres, so we should look into batching on the worker side https://docs.bullmq.io/bullmq-pro/batches
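For context, a rough sketch of what per-FID jobs with plain BullMQ could look like; the queue name, connection settings, and `backfillFid` helper are hypothetical, and BullMQ Pro's batches feature (linked above) is only referenced, not shown:

```ts
import { Queue, Worker } from 'bullmq'

const connection = { host: 'localhost', port: 6379 } // assumed Redis connection

// One small job per FID keeps each Redis payload tiny; the trade-off is that
// Postgres now sees many single-FID writes instead of a few large batches.
const backfillQueue = new Queue('backfill', { connection })

async function enqueueFids(fids: number[]) {
  await backfillQueue.addBulk(
    fids.map((fid) => ({ name: 'backfill-fid', data: { fid } }))
  )
}

// Placeholder for the project's actual hub-to-Postgres backfill logic.
async function backfillFid(fid: number): Promise<void> {
  console.log(`backfilling fid ${fid}`)
}

// The worker processes one FID per job. Grouping several jobs per Postgres
// transaction would be done with BullMQ Pro's batches feature (linked above),
// which is not sketched here.
new Worker('backfill', async (job) => backfillFid(job.data.fid), { connection })
```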
It's faster to first request cast hashes from the `casts` table, then follow up to get full cast metadata from `casts_enhanced`
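A hedged sketch of that two-step lookup using node-postgres; only the `casts` and `casts_enhanced` table names come from the note above, while the `fid`, `hash`, and `timestamp` columns, the hex-text hash format, and the LIMIT are assumptions:

```ts
import { Pool } from 'pg'

// Assumes PG* env vars for the connection.
const pool = new Pool()

async function getCastsByFid(fid: number) {
  // Step 1: pull just the hashes from the lean `casts` table.
  const { rows: hashRows } = await pool.query<{ hash: string }>(
    'SELECT hash FROM casts WHERE fid = $1 ORDER BY timestamp DESC LIMIT 100',
    [fid]
  )

  if (hashRows.length === 0) return []

  // Step 2: fetch the heavier per-cast metadata from `casts_enhanced` by hash.
  const { rows } = await pool.query(
    'SELECT * FROM casts_enhanced WHERE hash = ANY($1)',
    [hashRows.map((r) => r.hash)]
  )

  return rows
}
```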
It's faster to call the function directly for some reason
The encoding is an attempt to reduce Redis memory consumption. The batch size makes it easier to see which FIDs, if any, throw errors during backfill
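One way this could look, assuming the ts-proto style `Message.encode` exported by `@farcaster/hub-nodejs`; the `BATCH_SIZE` value and the `backfillFids` helper are illustrative placeholders, not the project's actual constants or code:

```ts
import { Redis } from 'ioredis'
import { Message } from '@farcaster/hub-nodejs' // assumed ts-proto style export

const redis = new Redis()

// Protobuf bytes are much more compact than JSON strings of the same message,
// which is the memory saving the encoding is aiming for.
async function pushEncoded(message: Message) {
  await redis.rpush('messages', Buffer.from(Message.encode(message).finish()))
}

// Small fixed-size FID batches mean a failure can be pinned to a handful of
// FIDs in the logs instead of one anonymous, giant backfill run.
const BATCH_SIZE = 50 // illustrative value

// Placeholder for the project's actual per-batch backfill logic.
async function backfillFids(fids: number[]): Promise<void> {
  console.log(`backfilling ${fids.length} fids`)
}

async function backfillInBatches(fids: number[]) {
  for (let i = 0; i < fids.length; i += BATCH_SIZE) {
    const batch = fids.slice(i, i + BATCH_SIZE)
    try {
      await backfillFids(batch)
    } catch (err) {
      console.error(`Backfill failed for FIDs ${batch[0]} to ${batch[batch.length - 1]}`, err)
    }
  }
}
```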