Daemon.jsonrpc_file_read: read claims from a file and download them #3423
This follows after #3422.
The idea with #3422 is to produce a file with a list of claims. With this pull request we take that written file, parse it to get the claim IDs, and then download each of the streams. The file is a comma-separated values (CSV) file, although by default we use the semicolon `;` as the separator.

Basically, the idea is that we can share lists of claims with other users of the LBRY network, and they can import these lists on their own computers (through `lbrynet` or the LBRY Desktop application) so that they can download the same claims that we have, and thus help seed the same content that we are seeding.

This is a prototype implementation; it works when the number of claims is relatively small. However, once the number of claims is large, more than 500 or so, the `Daemon.jsonrpc_file_read` method will time out, so it won't finish processing the list. I'm not sure what can be done to make sure it processes a big list without timeouts.

The obvious solution is to not implement this in the SDK itself, but to parse the file and call `lbrynet get` on each of the claims. Then each call to `get` will be separate from the others, and each will have its own timeout. Also, since the file is meant to contain the `'claim_id'`, `get` should be able to handle claim IDs, as proposed in #3411.
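A minimal sketch of that client-side alternative: parse the semicolon-separated file outside the SDK, then invoke `lbrynet get` once per claim so that each download gets its own timeout. The sample file layout and the `claim_id` column name here are assumptions based on the description above (the exact format produced by #3422 may differ), and passing a bare claim ID to `get` assumes #3411 is implemented.

```python
import csv
import io
import subprocess

def read_claim_ids(text, delimiter=";"):
    """Return the 'claim_id' column from a delimiter-separated claim list.

    The column name and delimiter are assumptions; adjust to match the
    file actually written by Daemon.jsonrpc_file_save (#3422).
    """
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [row["claim_id"].strip() for row in reader if row.get("claim_id")]

def download_claims(claim_ids, dry_run=True):
    """Call `lbrynet get` separately for each claim.

    Each subprocess call is independent, so one slow or failed download
    cannot time out the whole batch.
    """
    for claim_id in claim_ids:
        cmd = ["lbrynet", "get", claim_id]  # relies on #3411 accepting claim IDs
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=False)

# Hypothetical file contents for illustration:
sample = "claim_id;name\nabc123;some-video\ndef456;another-video\n"
print(read_claim_ids(sample))  # → ['abc123', 'def456']
```

Since each claim is fetched by a separate `lbrynet get` call, a list of 500+ claims no longer has to finish inside a single `Daemon.jsonrpc_file_read` request.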