[Feature Request] Local Scraper (Use browser auth) #172
Comments
Yes, taking screenshots is possible with Chrome extensions.
This makes a lot of sense. The extension itself can capture the page content so that hoarder doesn't need to crawl it. This is a reasonable feature request, will add it to our todo list :)
This would be a great feature. Also, tubearchivist has a browser extension that syncs your YouTube cookies with the tubearchivist server. An extension that automatically shares all your cookies, lets you choose which cookies to share, or sends the cookies of the current page to hoarder before it starts scraping might be an option.
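The cookie-sharing idea above could be sketched as an extension background script. This is only a hedged illustration, not hoarder's actual API: the `/api/v1/cookies` endpoint, the payload fields, and the helper names are assumptions; `chrome.cookies.getAll` is the real Chrome extension API (it requires the `cookies` permission).

```javascript
// Hypothetical sketch: forward a page's cookies to the server before
// it scrapes, so the crawl runs with the user's session.
// The endpoint and payload shape are assumptions for illustration.

// Serialize cookies into a standard Cookie request-header value.
function cookiesToHeader(cookies) {
  return cookies.map((c) => `${c.name}=${c.value}`).join("; ");
}

// Keep only cookies for domains the user opted to share.
function filterByDomains(cookies, allowedDomains) {
  return cookies.filter((c) =>
    allowedDomains.some((d) => c.domain === d || c.domain.endsWith(`.${d}`))
  );
}

async function shareCookiesForPage(apiBase, apiKey, pageUrl, allowedDomains) {
  // chrome.cookies.getAll({ url }) returns the cookies a request to
  // that URL would send (real API; needs the "cookies" permission).
  const cookies = await chrome.cookies.getAll({ url: pageUrl });
  const shared = filterByDomains(cookies, allowedDomains);
  await fetch(`${apiBase}/api/v1/cookies`, { // hypothetical endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ url: pageUrl, cookieHeader: cookiesToHeader(shared) }),
  });
}
```

The per-domain filter matters here: sending only the cookies for the page being scraped limits how much session state leaves the browser.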
A similar solution to the Evernote Web Clipper would be awesome. Select some text/images -> right click -> hoard. https://chromewebstore.google.com/detail/evernote-web-clipper/pioclpoplcdbaefihamjohnefbikjilc?hl=en
^ web-clipper does exist and works, but I didn't like the flow too much. SingleFile is another project to check: it outputs .html (or an archive). I think it's easier to manage and quick to run. It has an
A number of sites I visit and would like to bookmark require authentication. A good example is news sites with content behind paywalls.
A scraper running on a server won't work for these; it will just save the login screen.
Can the browser extensions pass a copy of the page via the API and save that?
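A minimal sketch of what passing the page via the API could look like, assuming a hoarder-like REST endpoint (`/api/v1/bookmarks`) and payload field names that are purely illustrative — the real API may differ. The key point is that a content script already runs inside the authenticated session, so it can capture the rendered DOM instead of asking the server to re-crawl behind the paywall.

```javascript
// Hypothetical sketch: upload the already-rendered page from the
// extension rather than crawling server-side. Endpoint path and
// field names are assumptions, not hoarder's documented API.

// Build the request body from what the extension captured in-page.
function buildSavePayload(url, title, html) {
  return { type: "link", url, title, htmlContent: html };
}

async function hoardCurrentPage(apiBase, apiKey) {
  const payload = buildSavePayload(
    location.href,
    document.title,
    document.documentElement.outerHTML // rendered DOM, post-login
  );
  const res = await fetch(`${apiBase}/api/v1/bookmarks`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`save failed: ${res.status}`);
  return res.json();
}
```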