
Feature Request: Save crawled documents #1

Open
mhauri opened this issue Feb 17, 2017 · 1 comment

mhauri commented Feb 17, 2017

This is not an issue but a feature request.
For several reasons it would be great to have an option to save the crawled sites as HTML locally.
I'm not sure whether this is easy to implement, but I would really like to have such an option.

Keep up the great work.
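
A minimal sketch of what such an option could look like, written in Go; the `savePage` helper, the output directory layout, and the hypothetical `./crawl-output` directory are assumptions for illustration, not part of gargantua:

```go
package main

import (
	"io"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
)

// savePage writes the response body of a crawled URL to a local file,
// mirroring the URL's host and path inside outputDir.
func savePage(outputDir string, pageURL *url.URL, body io.Reader) error {
	relPath := pageURL.Path
	if relPath == "" || relPath[len(relPath)-1] == '/' {
		relPath += "index.html" // directory-style URLs get an index file
	}
	target := filepath.Join(outputDir, pageURL.Host, filepath.FromSlash(relPath))
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	file, err := os.Create(target)
	if err != nil {
		return err
	}
	defer file.Close()
	_, err = io.Copy(file, body)
	return err
}

func main() {
	u, _ := url.Parse("https://example.com/blog/post")
	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Write the fetched page to ./crawl-output/example.com/blog/post
	if err := savePage("./crawl-output", u, resp.Body); err != nil {
		panic(err)
	}
}
```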

andreaskoch pushed a commit that referenced this issue Feb 17, 2017
@andreaskoch
Owner

Thanks for the suggestion.
I thought about that as well. I added it to the roadmap (see: c7b6281).
Maybe I will combine that with a change that also downloads all images, stylesheets and JavaScript files. Currently gargantua only downloads the href links it finds on a page.
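
A rough sketch of how asset URLs (stylesheets, scripts, images) could be collected alongside the anchor hrefs. It uses the golang.org/x/net/html parser and is only illustrative, not gargantua's actual extraction code:

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

// assetURLs returns the href/src values of <a>, <link>, <script> and <img>
// elements found in the given HTML document.
func assetURLs(document string) []string {
	root, err := html.Parse(strings.NewReader(document))
	if err != nil {
		return nil
	}
	var urls []string
	var visit func(n *html.Node)
	visit = func(n *html.Node) {
		if n.Type == html.ElementNode {
			for _, attr := range n.Attr {
				switch {
				case n.Data == "a" && attr.Key == "href",
					n.Data == "link" && attr.Key == "href",
					n.Data == "script" && attr.Key == "src",
					n.Data == "img" && attr.Key == "src":
					urls = append(urls, attr.Val)
				}
			}
		}
		for child := n.FirstChild; child != nil; child = child.NextSibling {
			visit(child)
		}
	}
	visit(root)
	return urls
}

func main() {
	page := `<html><head><link rel="stylesheet" href="/site.css"></head>
<body><a href="/about">About</a><img src="/logo.png"><script src="/app.js"></script></body></html>`
	// Prints [/site.css /about /logo.png /app.js]
	fmt.Println(assetURLs(page))
}
```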
