New Service: Producer Feeder #45

Closed
iPromKnight opened this issue Feb 2, 2024 · 6 comments
Labels
enhancement New feature or request

Comments

iPromKnight (Collaborator) commented Feb 2, 2024

I've got an idea for a new service, or an expansion to the producer service.
We'd publish every incoming request for data made against the addon - along with its imdbId and the extracted metadata (title, season, episode, year, etc.) - to the broker.

These messages would go onto a new queue.

The producer would listen to this queue and then use the syncler / wako / weyd scrapers to perform searches for data. Anything found would be pushed as ingestions into the consumer queue, essentially expanding the collection every time someone searches for something.
We'd be able to skip the augmentation / scraping based on the last-updated datetime of an item's imdbId in the database, etc.
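
To make that concrete, here's a rough sketch of what the published message and the freshness check could look like. Everything here is assumed for illustration - the queue name, message shape, 24-hour window, and the use of RabbitMQ via amqplib are not decided anywhere in this issue:

```ts
import amqp from "amqplib";

// Hypothetical shape of the message the addon would publish for each incoming request.
interface SearchRequestMessage {
  imdbId: string;      // e.g. "tt0944947"
  title: string;
  year?: number;
  season?: number;     // present for series requests
  episode?: number;
  requestedAt: string; // ISO timestamp of the original addon request
}

const SEARCH_QUEUE = "search-requests"; // assumed queue name

async function publishSearchRequest(msg: SearchRequestMessage): Promise<void> {
  const conn = await amqp.connect(process.env.RABBITMQ_URL ?? "amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertQueue(SEARCH_QUEUE, { durable: true });
  channel.sendToQueue(SEARCH_QUEUE, Buffer.from(JSON.stringify(msg)), { persistent: true });
  await channel.close();
  await conn.close();
}

// On the producer side: only scrape if the imdbId hasn't been refreshed recently.
const REFRESH_INTERVAL_MS = 24 * 60 * 60 * 1000; // assumed 24h window

function needsScrape(lastUpdated: Date | undefined, now = new Date()): boolean {
  return !lastUpdated || now.getTime() - lastUpdated.getTime() > REFRESH_INTERVAL_MS;
}
```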

Putting something together that can handle the weyd scrapers, for example, is pretty easy - I mean, helios already has what we need: https://github.com/wako-unofficial-addons/helios/blob/master/projects/plugin/src/plugin/queries/torrents/torrents-from-provider-base.query.ts

Syncler providers like JakedUp could be used - like this:
express-hybrid.json

With this new service, the collection of cached information would grow organically.

iPromKnight added the enhancement label on Feb 2, 2024
Gabisonfire (Collaborator) commented

So, some kind of p2p database. I wanted to do something like this with Jackett: when you get a query result, it's sent to your local cache and to a global cache.
But that would require some kind of authentication so the global cache isn't spammed with garbage.

purple-emily (Collaborator) commented Feb 2, 2024

@iPromKnight please correct me if my prom -> Emily translation of what you wrote is off:

A user on Stremio requests something. We don't have any sources for it in the database. The new service you are proposing adds the request to a queue, and a scraper is sent out to find what is being requested.

So essentially we'd be recreating a very small version of Jackett as a fallback, to make up for the years of data a user would otherwise be missing compared to the original Torrentio.

iPromKnight (Collaborator, Author) commented Feb 2, 2024

Exactly that Emily ^^ - spot on.

I'm not saying this will be distributed in any way - it will still be your own database instance etc. - but we'd be able to fall back onto all of the sources other providers use to populate the db, without the scraper having to wait until something hits the RSS feeds.

Because of the limitations in Stremio (requests are plain GET requests, with no way to post back on a webhook while the addon is searching), we could take the same approach torrentio does when a file is being downloaded to RealDebrid: show a video file or something stating that no sources were found, but that an ingestion attempt will occur.
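
A minimal sketch of that placeholder-stream idea, assuming stremio-addon-sdk; findCachedStreams and queueIngestion are hypothetical stand-ins for the real database lookup and broker publish, not existing addon code:

```ts
import { addonBuilder } from "stremio-addon-sdk";

// Illustration-only manifest; the real addon already has its own.
const manifest = {
  id: "org.example.fallback-sketch",
  version: "0.0.1",
  name: "Fallback sketch",
  description: "Illustration only",
  resources: ["stream"],
  types: ["movie", "series"],
  catalogs: [],
};

async function findCachedStreams(type: string, id: string): Promise<any[]> {
  return []; // placeholder: query the existing collection here
}

async function queueIngestion(type: string, id: string): Promise<void> {
  // placeholder: publish the request to the new search-requests queue here
}

const builder = new addonBuilder(manifest);

builder.defineStreamHandler(async ({ type, id }) => {
  const cached = await findCachedStreams(type, id);
  if (cached.length > 0) return { streams: cached };

  // Nothing cached: queue a scrape and answer with a single placeholder "stream",
  // the same trick torrentio uses while a file downloads to RealDebrid.
  await queueIngestion(type, id);
  return {
    streams: [
      {
        name: "Fallback",
        title: "No sources found yet - an ingestion attempt has been queued",
        url: "https://example.com/static/no-sources.mp4", // assumed static placeholder video
      },
    ],
  };
});
```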

Gabisonfire (Collaborator) commented

Ok, thanks for the clarification. I like the fallback approach, but we might as well have Stremio display "addon still loading" until the addon can provide a response.

iPromKnight (Collaborator, Author) commented

In that case I'd suggest we add another service with responsibility for this, which the addon can call during the search promise.

It would perform lookups directly against the Syncler feeds, return the results to the addon to be fed to Stremio, and also publish them out to be stored in the db.

This way the addon stays cleaner responsibility-wise: it's purely aggregation and filtering.

All the scrapers do is load a specific page with a set query for things like title, episode, season, etc., so page loads and scrapes for them should be pretty quick.

Thinking about it, we'd have to introduce a couple of extra manifest options: a configurable timeout to wait for the results, an option to disable the fallback lookups, etc.
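
As a sketch of how the addon could call that lookup service during the stream request, bounded by the proposed configurable timeout - the endpoint, helper names, and default timeout below are assumptions, not anything agreed in this thread:

```ts
// Hypothetical result shape and helpers standing in for the real fallback service.
interface FallbackResult {
  infoHash: string;
  title: string;
}

async function fetchFallbackResults(imdbId: string): Promise<FallbackResult[]> {
  const res = await fetch(`http://feeder:7788/lookup/${imdbId}`); // assumed service endpoint
  return res.json();
}

async function publishForStorage(results: FallbackResult[]): Promise<void> {
  // placeholder: push the results to the broker so they end up in the db
}

// timeoutMs would come from the proposed manifest option; if the user disables
// fallback lookups, the addon would simply never call this.
async function fallbackLookup(imdbId: string, timeoutMs = 5000): Promise<FallbackResult[]> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("fallback lookup timed out")), timeoutMs)
  );
  try {
    const results = await Promise.race([fetchFallbackResults(imdbId), timeout]);
    // Fire-and-forget: persisting the results should not block the Stremio response.
    publishForStorage(results).catch(() => { /* best effort */ });
    return results;
  } catch {
    return []; // timed out or failed: return nothing extra and let cached results stand
  }
}
```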

iPromKnight (Collaborator, Author) commented

Emily is handling this.
