Improvements for spider management #37
Comments
Actually, I think it's possible to get the stats from the DataStorage: e.g. https://github.com/oltarasenko/crawly/blob/master/lib/crawly/manager.ex#L83
I see. The crawl speed seems to be calculated from the previous state's crawl count, so a separate callback would be necessary to obtain the crawl speed from the manager. As for the drop count, the manager doesn't seem to be tracking it, and neither are the request/response workers.
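The delta-based speed computation discussed above could be sketched roughly as follows. This is a minimal illustration, not Crawly's actual implementation: the `:tick` interval, the state fields, and the exact return shape of `Crawly.DataStorage.stats/1` are assumptions based on the linked `manager.ex` code.

```elixir
defmodule SpeedSketch do
  # Hypothetical sketch: a manager-like GenServer keeps the previous
  # items count in its state and diffs it on each tick to get a speed.
  use GenServer
  require Logger

  @tick 60_000 # assumed recomputation interval: once per minute

  def handle_info(:tick, %{spider: spider, prev_count: prev} = state) do
    # Assumption: DataStorage can report the stored-items count per spider.
    {:stored_items, count} = Crawly.DataStorage.stats(spider)

    # Items crawled since the last tick = current count minus previous count.
    speed = count - prev
    Logger.info("Current crawl speed: #{speed} items/min")

    Process.send_after(self(), :tick, @tick)
    {:noreply, %{state | prev_count: count}}
  end
end
```

This is why a separate callback would be needed: the speed only exists as a delta inside the manager's state, not as a value stored anywhere queryable.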
Tentative scope:
@Ziinc I am thinking of making a major release of Crawly (aka v1.0.0). I think after a year of development and releases it's time to do that. I see that our competitors, https://github.com/fredwu/crawler and https://github.com/Anonyfox/elixir-scrape, have already reached their stable state and version, so I am tempted to do the same. That said, I think this is the last ticket needed to round out the 1.0.0 version of Crawly.
After 0.11.0? I think you should only bump the major version once the API scope has stabilized. Right now there are still quite a few areas that are incomplete and may result in API changes. There's not much use comparing to other projects, as they have been around longer.
@Ziinc Yes, we need to aim for a 1.0.0 release. It's a bit hard to push Crawly into production for larger products at the moment. The fact that we don't have a first stable major release hints that the framework is still in a testing stage, and people constantly say that it's not stable. I agree regarding API stability; we need to achieve it. However, psychologically speaking, we need to be able to state that we have a 1.0.0, i.e. stable, version. We probably need to define a scope of work before we can approach 1.0.0, but it's even more important to get more production usage. If we fail to convince people to use Crawly in production, we will die as a project :(
Currently, the `Crawly.Engine` APIs are lacking for spider monitoring and management, especially when there is no access to logs. I think some critical areas are:

- a `stop_all_spiders` function to stop all running spiders

Stopping the spiders should be easy to implement. For the spider stats, since some of the data is nested quite deep in the supervision tree, I'm not so sure how to get it to "bubble up" to the `Crawly.Engine` level.

@oltarasenko thoughts?
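The "easy to implement" half could plausibly be a thin wrapper over the existing per-spider engine calls. A minimal sketch, assuming the engine exposes (or could expose) a way to list started spiders, here named `running_spiders/0` for illustration:

```elixir
defmodule Crawly.Engine.Sketch do
  # Hypothetical: iterate over whatever the engine reports as started
  # spiders and stop each one via the existing stop_spider/1.
  def stop_all_spiders do
    Crawly.Engine.running_spiders()
    |> Enum.each(&Crawly.Engine.stop_spider/1)
  end
end
```

The stats "bubbling up" is the harder part, since it would require either the workers pushing counts to a shared place (e.g. an ETS table or the manager) or the engine walking the supervision tree to query each worker.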