"Your own personal Way-Back Machine"
ArchiveBox saves an archived copy of the websites you visit into a local browsable folder (the actual content of each site, not just the list of links). It can archive your entire browsing history, or import links from bookmark managers, RSS, text files, and more.
Can import links from:
- Browser history or bookmarks (Chrome, Firefox, Safari, IE, Opera)
- RSS or plain text lists
- Shaarli, Delicious, Instapaper, Reddit Saved Posts, Wallabag, Unmark.it, and more!
Can save these things for each site:
- Browsable static HTML archive (wget)
- PDF (Chrome headless)
- Screenshot (Chrome headless)
- DOM dump of the page after 2s of JS execution (Chrome headless)
- Git repo download (git clone)
- Media download (youtube-dl: video, audio, subtitles, including playlists)
- WARC archive (wget warc)
- Archive.org snapshot (submits the URL to the Wayback Machine)
- Index summary pages: index.html & index.json
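For each link, these outputs are collected together in one snapshot folder. The layout below is an illustrative sketch (folder and file names are assumptions, not guaranteed output):

```
archive/
└── 1535000000/              # one folder per link, named by bookmark timestamp
    ├── index.html           # per-link summary page
    ├── index.json           # machine-readable metadata for this link
    ├── output.pdf           # PDF print of the page (Chrome headless)
    ├── screenshot.png       # full-page screenshot (Chrome headless)
    ├── output.html          # DOM dump after JS execution
    ├── warc/                # raw WARC capture (wget)
    ├── media/               # video/audio/subtitles (youtube-dl)
    ├── git/                 # cloned source repos
    └── archive.org.txt      # pointer to the submitted archive.org snapshot
```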
The archiving is additive, so you can schedule `./archive` to run regularly and pull new links into the index.
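For example, a cron entry to re-run the archiver nightly against an exported bookmarks file might look like this (the paths here are illustrative, not part of ArchiveBox itself):

```
# Hypothetical crontab entry: run nightly at 03:00; already-archived links
# are skipped, and any new ones are added to the index (archiving is additive).
0 3 * * * cd /home/user/ArchiveBox && ./archive /home/user/exports/bookmarks_export.html
```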
All the saved content is static and indexed with JSON files, so it lives forever, is easily parseable, and requires no always-running backend.
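Because the index is plain JSON, any tool can read it without a running server. A minimal sketch, assuming a hypothetical schema with a top-level "links" array of objects that each have a "url" field (the real schema may differ):

```shell
# Create a sample index file matching the assumed schema:
cat > /tmp/example_index.json <<'EOF'
{"links": [{"url": "https://example.com", "timestamp": "1535000000"}]}
EOF

# List every archived URL using only python3's stdlib:
python3 -c 'import json,sys; [print(l["url"]) for l in json.load(open(sys.argv[1]))["links"]]' /tmp/example_index.json
# → https://example.com
```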
DEMO: archive.sweeting.me (website archiving / crawler)
    git clone https://github.com/pirate/ArchiveBox.git
    cd ArchiveBox
    ./setup

    # Export your bookmarks, then run the archive command to start archiving!
    ./archive ~/Downloads/firefox_bookmarks.html

    # Or to add just one page to your archive
    echo 'https://example.com' | ./archive
We use the GitHub wiki system for documentation.
You can also access the docs locally by looking in the