
Improvements for spider management #37

Closed
Ziinc opened this issue Dec 11, 2019 · 6 comments
Labels: good first issue

Ziinc (Collaborator) commented Dec 11, 2019

Currently, the Crawly.Engine APIs are lacking for spider monitoring and management, especially when there is no access to logs.

I think some critical areas are:

  • spider crawl stats (scraped item count, dropped request/item count, scrape speed)
  • stop_all_spiders to stop all running spiders

The stopping of spiders should be easy to implement.
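Something along these lines, assuming the Engine can list its currently running spiders (running_spiders/0 is a placeholder name here; only stop_spider/1 exists today):

```elixir
# Sketch only: running_spiders/0 is hypothetical, stop_spider/1 already exists.
def stop_all_spiders() do
  Crawly.Engine.running_spiders()
  |> Enum.each(&Crawly.Engine.stop_spider/1)
end
```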

For the spider stats, since some of the data is nested quite deep in the supervision tree, I'm not so sure how to get it to "bubble up" to the Crawly.Engine level.

@oltarasenko thoughts?

oltarasenko (Collaborator) commented:
Actually, I think it's possible to get the stats from the DataStorage, e.g. Crawly.DataStorage.stats(spider_name).

https://github.com/oltarasenko/crawly/blob/master/lib/crawly/manager.ex#L83
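If I read that line right, usage is roughly the following (the return shape is paraphrased from the linked manager.ex, and MySpider is just an example spider module):

```elixir
# Approximate usage; the exact return shape may differ between Crawly versions.
{:stored_items, items_count} = Crawly.DataStorage.stats(MySpider)
IO.puts("#{inspect(MySpider)} has stored #{items_count} items so far")
```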

Ziinc (Collaborator, Author) commented Dec 16, 2019

I see, the crawl speed seems to be calculated based on the previous state's crawl count, so a separate callback would be necessary to obtain the crawl speed from the manager.

As for the drop count, the manager doesn't seem to be tracking it, and neither are the request/response workers.
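For reference, the speed calculation in the manager boils down to a delta between two ticks; a paraphrased sketch (field and interval names are illustrative, not the actual manager state):

```elixir
# Paraphrased sketch of the tick-based speed calculation, not the actual manager code.
def handle_info(:tick, state) do
  {:stored_items, items_count} = Crawly.DataStorage.stats(state.spider_name)
  # Items scraped since the previous tick == current crawl speed.
  crawl_speed = items_count - state.prev_scraped_count
  IO.puts("Current crawl speed: #{crawl_speed} items per tick")
  Process.send_after(self(), :tick, state.tick_interval)
  {:noreply, %{state | prev_scraped_count: items_count}}
end
```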

Ziinc (Collaborator, Author) commented May 16, 2020

Tentative scope (a rough sketch of the corresponding Engine API follows the list):

  • start spider (implemented)
  • stop spider (implemented)
  • start all spiders
  • stop all spiders
  • spider stats (crawl count, overridden settings, request count, storage count, crawl speed, drop count)
  • list all spiders
  • schedule spider to start at specific time (maybe cron style scheduling?)
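Roughly, the Engine surface this implies, sketched as a behaviour; only start_spider/1 and stop_spider/1 exist today, and everything else, including the module name, is hypothetical:

```elixir
defmodule Crawly.Engine.API do
  @moduledoc "Hypothetical sketch of the target Engine surface for this ticket."

  @callback start_spider(module()) :: :ok | {:error, term()}
  @callback stop_spider(module()) :: :ok
  @callback start_all_spiders() :: :ok
  @callback stop_all_spiders() :: :ok
  @callback list_spiders() :: [module()]
  @callback stats(module()) :: map()
  @callback schedule_spider(module(), cron_expression :: String.t()) :: :ok
end
```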

Ziinc self-assigned this May 16, 2020
oltarasenko (Collaborator) commented:
@Ziinc I am thinking of making a major release of Crawly (aka v1.0.0). After a year of development and releases, I think it's time to do that. I see that our competitors, https://github.com/fredwu/crawler and https://github.com/Anonyfox/elixir-scrape, have already reached a stable state and version, so I am tempted to do the same.

Having said that, I would add that I think this is the last ticket needed to round out the 1.0.0 version of Crawly.

Ziinc (Collaborator, Author) commented May 18, 2020

After 0.11.0? I think you should only bump the major version once the API scope has stabilized. Right now there are still quite a few areas that are incomplete and may result in API changes.

Not much use comparing to other projects, as they have been around longer.

@oltarasenko
Copy link
Collaborator

@Ziinc Yes, we need to aim for a 1.0.0 release. It's a bit hard to push Crawly into production for larger products at the moment. The fact that we don't have a first stable major release suggests the framework is still in the testing stage, and people constantly say that it's not stable.

I agree regarding API stability. We need to achieve it; however, psychologically speaking, we need to be able to state that we have a 1.0.0, i.e. stable, version.

We probably need to define a scope of things to do before we can approach 1.0.0; however, it's even more important to get more production usage. If we fail to convince people to use Crawly in production, we will die as a project :(
