Iwata Asks Downloader
This tool downloads the Iwata Asks series of interviews, saving as Markdown and HTML with images.
I created this tool in Spring/Summer 2019 so that I could more easily read and search the Iwata Asks interviews.
Note: This tool was developed and tested on macOS and works on Linux; I'm not sure whether it works on Windows.
You can fund development of this tool, or just say thanks, through one of the following:
- Patreon: https://www.patreon.com/gingerbeardman/
- Ko-Fi: https://ko-fi.com/gingerbeardman
- PayPal: https://www.paypal.me/mattsephton
Your support is appreciated!
- None of the Iwata Asks interview content is stored here!
- The Iwata Asks interview content remains copyright of its creators.
- This tool and its output are meant for personal use only.
- Don't do anything you shouldn't do with the content.
- Watch out for the Ninjas!
- Python 3, with:
  - Scrapy (`$ pip install scrapy`)
- Pandoc (`$ brew install pandoc`)
Note: macOS Catalina users will need to use `pip3` and add `--user` to the end of each such command
- Make sure you're running Python 3 (`$ python -V`)
- Run the scraper using the shell script
- Watch the progress bar as the process completes (approx. 25 minutes on first run)
- Output is placed in the output directory
- Run `to_epub.sh` to convert the HTML files to EPUB
How does this work?
Scrapy is a framework for creating web spiders.
A web spider loads a web page and extracts content from it according to defined rules/logic/programming.
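As an illustration of the idea (not the actual spider, which uses Scrapy), a toy extraction rule in plain Python might collect a page's title and its navigation links; the HTML here is a made-up placeholder:

```python
from html.parser import HTMLParser

class TitleAndLinks(HTMLParser):
    """Toy extraction rule: collect the page title and all link targets."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Hypothetical page; a real interview page has the navigation links the spider follows.
page = "<html><head><title>Iwata Asks</title></head><body><a href='/page2'>Next</a></body></html>"
parser = TitleAndLinks()
parser.feed(page)
print(parser.title)   # Iwata Asks
print(parser.links)   # ['/page2']
```

Scrapy replaces this hand-rolled parsing with CSS/XPath selectors and handles the downloading, scheduling, and caching for you.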
This tool uses a list of URLs for the first page of each interview (`iwata-eu.csv`) to feed the scraper, whose web spider (`iwata-eu.py`) extracts the content and automatically includes subsequent pages by following the original page navigation links. The main loop is controlled by a shell script.
Currently the scraper only works on the EU series of interviews, because their static page structure is more suitable (the USA interviews use AJAX to load content). The EU list has 178 seed URLs, most of which have multiple pages, so downloading and processing over 30,000 files takes quite a while the first time (approx. 25 minutes). Subsequent runs use cached data and are much quicker (approx. 13 minutes). The final output should be 178 files each of Markdown/HTML, along with 3,416 images.
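The seed-list step can be sketched in plain Python. The URLs below are made-up placeholders, and the one-URL-per-row layout is an assumption about the real `iwata-eu.csv`:

```python
import csv
import io

# Hypothetical seed file contents; the real iwata-eu.csv holds 178 first-page URLs.
seed_csv = io.StringIO(
    "https://example.com/iwata-asks/interview-1/page-1.html\n"
    "https://example.com/iwata-asks/interview-2/page-1.html\n"
)

# Assuming one URL per row in the first column.
start_urls = [row[0] for row in csv.reader(seed_csv) if row]
print(len(start_urls))  # 2
```

In Scrapy, a list like this would typically be assigned to the spider's `start_urls`, and the spider's parse callback would then `follow` the next-page links it finds.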
The scraper parses out the following content:
- Page Title
- Section Heading
- Interviewer Name
- Interviewer Text
- Related Image
The content from multiple pages is processed and reformatted as Markdown and HTML, then saved to disk as a single file.
Note: HTML generation accounts for approx. 3 minutes of processing time.
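The reassembly step might look something like the sketch below; the function name, field layout, and Markdown conventions are illustrative, not the spider's actual output format:

```python
def to_markdown(page_title, section_heading, blocks):
    """Reassemble extracted pieces into one Markdown document.

    `blocks` is a list of (speaker, text, image_url) tuples; image_url may be None.
    This is a hypothetical helper, not the real spider's code.
    """
    lines = [f"# {page_title}", "", f"## {section_heading}", ""]
    for speaker, text, image in blocks:
        lines.append(f"**{speaker}**")
        lines.append("")
        lines.append(text)
        if image:
            lines.append("")
            lines.append(f"![related image]({image})")
        lines.append("")
    return "\n".join(lines)

md = to_markdown(
    "Iwata Asks: Example",
    "1. Starting Out",
    [("Iwata", "Thank you for coming today.", None)],
)
print(md.splitlines()[0])  # # Iwata Asks: Example
```

The HTML output can then be produced from the Markdown (e.g. via Pandoc), which is where the extra processing time comes from.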
Single ePub versions of each HTML file can be generated using the script `to_epub.sh`
Finally, you can combine the ePub files into one book using a script (TO DO)
- ePub: links need to be internalised
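Each conversion boils down to a Pandoc invocation. A hedged sketch of building that command in Python follows; the real `to_epub.sh` may pass extra options (metadata, cover image, stylesheet) not shown here:

```python
from pathlib import Path

def pandoc_epub_cmd(html_file: Path) -> list:
    """Build the pandoc argv to convert one HTML file to EPUB.

    A minimal sketch: pandoc infers the formats from the file extensions.
    """
    return ["pandoc", str(html_file), "-o", str(html_file.with_suffix(".epub"))]

cmd = pandoc_epub_cmd(Path("output/interview-1.html"))
print(cmd)
```

The resulting list could be run with `subprocess.run(cmd, check=True)` once per HTML file.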
- `/iwata-eu.csv` (list of seed URLs)
- `/iwata/settings.py` (settings, including debug pipelines)
- `/iwata/spiders/iwata-eu.py` (the most important file, the spider itself!)
- You'll see notes about command lines I use to test the spider in the CodeRunner app, but you should be able to run them in a regular terminal too.
- Scrapy caches content in `/.scrapy/httpcache`, so you can develop against cached pages rather than wait for downloads each time.
- I recommend developing against a subset of pages and only using the full list (`iwata-eu.csv`) for your final output.
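For reference, Scrapy's on-disk cache is controlled through its standard `HTTPCACHE_*` settings. A minimal excerpt of the kind of configuration involved (the project's own `settings.py` may differ):

```python
# Enable Scrapy's on-disk HTTP cache so repeat runs reuse downloaded pages.
HTTPCACHE_ENABLED = True
HTTPCACHE_DIR = "httpcache"       # stored under the project's .scrapy directory
HTTPCACHE_EXPIRATION_SECS = 0     # 0 = cached responses never expire
```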
I will happily accept and merge any PR that improves this tool. I wrote this as I learned about Scrapy so there is undoubtedly room for improvement. Contributions are very welcome!
- Optimisations that speed up any part of the processing
- Improvements to readable output
- Improvements to format conversion
- Adding missing interviews (each source will require a new spider)
- Improvements to
2020-01-10: Now uses accurate progress bar
2020-01-06: Added EPUB generation
2020-01-05: Public Release
2019-07-03: Support for multiple URLs
2019-06-22: Saves as Markdown and HTML
2019-04-15: Initial scraper and spider