It would be a good addition not to have to re-scrape the same webpage over and over, as doing so is wasteful.
Scraped pages should persist, perhaps in a system-wide cache, so that subsequent calls to /web http://already-scraped.com/specific-page only re-scrape if the page has not been scraped before, or if the user specifically asks to — perhaps via a switch on the /web command.
Many thanks for the awesome job!
Re-scraping a webpage should only take a moment, and ensures you have a fresh copy of the data it contains. Persisting or caching the content could lead to problems with not picking up new page content.
Can you help me understand the problem you are having with re-scraping?
Agreed. Although it "should only take a moment", doing it multiple times a day adds up, especially on a poor connection.
My specific use-case is when I need to use specific features of tools/frameworks like Laravel or Filament. I find myself needing to re-scrape in order to provide context to tasks.
This is in furtherance of #400