
Require dynamic object scoped IWebCrawler.Crawl(Uri uri); #53

Closed
cjsheehan opened this issue Jan 7, 2015 · 2 comments

@cjsheehan commented Jan 7, 2015

It would be useful to have a dynamic object that is available only for the duration of the Crawl call, e.g.

IWebCrawler.Crawl(Uri uri, dynamic localConfig)

I am currently using the CrawlBag, but it's a little messy: I want to pass a business object to the crawler that should only be valid for that single call to Crawl; subsequent calls will pass different localConfig objects. These objects handle building and processing the DOM according to my business logic, and constructing the extracted hierarchical data.

I can see the _crawlContext persists for the lifetime of the IWebCrawler, which is great, as I need some configuration valid for the entire IWebCrawler existence, i.e. across multiple subsequent calls to Crawl. But I also need configuration scoped to an individual call to Crawl, which I'd imagine is best handled as a method parameter. Let me know if there is a better way of accomplishing this or if you need more info.
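To illustrate, here is roughly what my current workaround looks like. This is only a sketch; MyPageProcessor is a placeholder for one of my business objects, not an Abot type:

```csharp
using System;
using Abot.Crawler;
using Abot.Poco;

// Placeholder for the per-crawl business object described above.
class MyPageProcessor
{
    public void Process(CrawledPage page) { /* build and process the DOM */ }
}

class Example
{
    static void Main()
    {
        var crawler = new PoliteWebCrawler();
        crawler.PageCrawlCompleted += (sender, e) =>
        {
            // Pull the business object back out of the crawler-lifetime bag.
            MyPageProcessor processor = e.CrawlContext.CrawlBag.Processor;
            processor.Process(e.CrawledPage);
        };

        // The messy part: CrawlBag lives as long as the crawler, so each
        // call has to overwrite the previous crawl's object by hand.
        crawler.CrawlBag.Processor = new MyPageProcessor();
        crawler.Crawl(new Uri("http://example.com/"));

        crawler.CrawlBag.Processor = new MyPageProcessor();
        crawler.Crawl(new Uri("http://example.org/"));
    }
}
```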

@sjdirect (Owner) commented Jan 7, 2015

Abot is designed to crawl once per instance, so I would recommend only calling Crawl once. I may add a check that only allows the Crawl() method to be called once. Given that design focus, the CrawlBag should suit your needs?
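In other words, something like the following sketch, reusing the MyPageProcessor placeholder from above:

```csharp
using System;
using Abot.Crawler;
using Abot.Poco;

class CrawlRunner
{
    // One crawler instance per Crawl() call, so the CrawlBag is
    // effectively scoped to that single crawl.
    public CrawlResult CrawlOnce(Uri uri, MyPageProcessor processor)
    {
        var crawler = new PoliteWebCrawler();
        crawler.CrawlBag.Processor = processor;
        crawler.PageCrawlCompleted += (sender, e) =>
        {
            MyPageProcessor p = e.CrawlContext.CrawlBag.Processor;
            p.Process(e.CrawledPage);
        };
        return crawler.Crawl(uri);
    }
}
```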

@cjsheehan (Author)

OK, I understand; it's definitely worth adding the check and a note in the docs.

My app is a data scraper/aggregator. It hits a single web server (per crawler instance) for fewer than 1,000 pages once per day, and then hits the same server several more times throughout the day for tens of pages each time. It would therefore be nice to have the crawler instance persist for the whole day and maintain its politeness state.

The Abot architecture takes care of the crawling (scheduling, threading, requesting, etc.) much better than my current app does, so I am keen to integrate Abot. I will try to persist the politeness state between crawler instances in my own code via the CrawlBag.
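Something along these lines, perhaps. The per-host delay tracking below is entirely my own placeholder logic layered on top of Abot's built-in politeness, not an Abot feature:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using Abot.Crawler;
using Abot.Poco;

class DailyCrawler
{
    // Outlives any single crawler instance; tracks the last request per host.
    readonly ConcurrentDictionary<string, DateTime> _lastRequest =
        new ConcurrentDictionary<string, DateTime>();
    readonly TimeSpan _minDelay = TimeSpan.FromSeconds(5); // placeholder value

    public CrawlResult CrawlPolitely(Uri uri)
    {
        var crawler = new PoliteWebCrawler();
        // Hand the shared state to this instance via the CrawlBag.
        crawler.CrawlBag.LastRequest = _lastRequest;
        crawler.PageCrawlStarting += (sender, e) =>
        {
            ConcurrentDictionary<string, DateTime> seen =
                e.CrawlContext.CrawlBag.LastRequest;
            string host = e.PageToCrawl.Uri.Host;
            DateTime last;
            if (seen.TryGetValue(host, out last))
            {
                TimeSpan wait = _minDelay - (DateTime.UtcNow - last);
                if (wait > TimeSpan.Zero)
                    Thread.Sleep(wait); // crude extra delay on top of Abot's own
            }
            seen[host] = DateTime.UtcNow;
        };
        return crawler.Crawl(uri);
    }
}
```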

@sjdirect closed this as completed Jan 9, 2015