This is a BASH script to perform fast image downloads sourced from Google Images, based on a specified search-phrase. It's a web-page scraper that sources a list of original image URLs and sends them to Wget (or cURL) to download in parallel. Optionally, it can then combine them into a single gallery image using ImageMagick's montage.
This is an expansion upon a solution provided by ShellFish and has been updated to handle Google's various page-code changes from April 2016 to the present.
$ wget -qN git.io/googliser.sh && chmod +x googliser.sh
$ curl -skLO git.io/googliser.sh && chmod +x googliser.sh
The user supplies a search-phrase and other optional parameters on the command-line.
A sub-directory with the name of this search-phrase is created below the current directory.
Google Images is queried and the results saved.
The results are parsed and all image links are extracted and saved to a URL list file. Any links for YouTube and Vimeo are removed.
The script iterates through this URL list and downloads the first [n]umber of available images. Up to 1,000 images can be requested. Up to 512 images can be downloaded in parallel (concurrently). If an image is unavailable, it's skipped and downloading continues until the required number of images have been downloaded.
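The parse-and-download steps above can be sketched in shell. This is an illustration only, not the script's actual code: the file names, filter pattern, and curl options are assumptions, and local file:// URLs stand in for real image links so the demo runs offline.

```shell
#!/usr/bin/env bash
# Stand-ins for remote images, so the demo needs no network access.
mkdir -p remote
printf 'image-1' > remote/pic1.jpg
printf 'image-2' > remote/pic2.jpg

# Build a raw link list, then drop YouTube and Vimeo entries as described.
cat > all.links <<EOF
file://$PWD/remote/pic1.jpg
https://www.youtube.com/watch?v=xxxx
file://$PWD/remote/pic2.jpg
EOF
grep -Eiv 'youtube\.com|vimeo\.com' all.links > download.links.list

# Fetch up to 8 URLs concurrently: -P 8 runs 8 curl processes at once and
# -n 1 hands one URL to each; --max-time and --retry mirror -t and -r.
xargs -P 8 -n 1 curl -s -O --max-time 30 --retry 3 < download.links.list
```

With curl's `-O`, each downloaded file keeps its remote filename, which is why the script can later match downloads back to their URLs.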
googliser is fully supported on Fedora Workstation, Manjaro & Ubuntu. Debian and macOS may require some extra binaries. Please report any issues.
$ sudo apt install wget imagemagick
$ xcode-select --install
$ ruby -e "$(curl -fsSL git.io/get-brew)"
$ brew install coreutils ghostscript gnu-sed imagemagick gnu-getopt
These sample images have been scaled down for easier distribution.
$ ./googliser.sh --phrase "puppies" --title 'Puppies!' --number 25 --upper-size 100000 --gallery
$ ./googliser.sh -p "kittens" -T 'Kittens!' -n16 -SGcompact
$ ./googliser.sh -n 380 -p "cows" -u 250000 -l 10000 -SG
$ ./googliser.sh [PARAMETERS] ...
Parameters are indicated with a hyphen and a single character, or in long form with two hyphens and the full text. Single-character options can be concatenated, e.g.
-dDEhLNqsSz. Parameters can be specified as follows:
-p [STRING] or --phrase [STRING]
The search-phrase to look for. Enclose whitespace in quotes, e.g. --phrase "small brown cows"
-a [PRESET] or
The shape of the image to download. Preset values are:
-b [INTEGER] or
Thickness of border surrounding the generated gallery image in pixels. Default is 30. Enter 0 for no border.
--colour [PRESET] or
The dominant image colour. Specify like --colour green. Default is 'any'. Preset values are:
full (colour images only)
Put the debug log into the image sub-directory afterward. If selected, debugging output is appended to 'debug.log' in the image sub-directory. This file is always created in the temporary build directory. Great for discovering the external commands and parameters used!
Perform an exact search only. Disregard Google suggestions and loose matches. Default is to perform a loose search.
Successfully downloaded image URLs will be saved into this file (if specified). Specify this file again for future searches to ensure the same links are not reused.
A comma-separated list (without spaces) of words to exclude from the search.
Only download images encoded in this file format. Preset values are:
Create a thumbnail gallery.
Build the gallery image with a transparent background.
Create the gallery in condensed mode. No padding between each thumbnail. The default leaves some space between each thumbnail.
Delete the downloaded images after building the thumbnail gallery. Default is to retain these image files.
Display the complete parameter list.
Put a list of URLs in a text file, then specify the file here. googliser will attempt to download the target of each URL. A Google search will not be performed. Images will be downloaded into the specified output path, or a path derived from a provided phrase or gallery title.
-i [FILE] or
Put your search phrases into a text file then specify the file here. googliser will download images matching each phrase in the file, ignoring any line starting with a #. One phrase per line.
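The phrases-file handling can be sketched like this (a simplified stand-in for the script's own parsing; the file name and loop are illustrative):

```shell
#!/usr/bin/env bash
# One phrase per line; lines starting with '#' are ignored.
cat > phrases.txt <<'EOF'
# farm animals
cows
small brown cows
EOF

# Simplified stand-in for the script's per-phrase loop.
while IFS= read -r phrase; do
    [[ -z $phrase || $phrase == \#* ]] && continue
    echo "searching for: $phrase"
done < phrases.txt
```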
-l [INTEGER] or --lower-size [INTEGER]
Only download image files larger than this many bytes. Some servers do not report a byte file-size, so these will be downloaded anyway and checked afterward (unless --skip-no-size is specified). Default is 2,000 bytes. This setting is useful for skipping files sent by servers that claim to have a JPG, but send HTML instead.
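The byte-size filter can be sketched as follows. This is an illustration of the idea, not the script's actual code; the thresholds are the documented -l and -u defaults, and the file names are made up for the demo:

```shell
#!/usr/bin/env bash
# Keep a download only if its size falls within the accepted byte range.
size_ok()
{
    local size
    size=$(wc -c < "$1")                   # byte count of the downloaded file
    (( size >= 2000 && size <= 200000 ))   # -l and -u defaults
}

printf '<html>404 Not Found</html>' > bogus.jpg   # an HTML page posing as a JPG
size_ok bogus.jpg || echo "rejected: bogus.jpg"
```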
Only get image file URLs, don't download any images. Default is to compile a list of image file URLs, then download them.
-m [PRESET] or
Only download images with at least this many pixels. Preset values are:
qsvga (400 x 300)
vga (640 x 480)
svga (800 x 600)
xga (1024 x 768)
2mp (1600 x 1200)
4mp (2272 x 1704)
6mp (2816 x 2112)
8mp (3264 x 2448)
10mp (3648 x 2736)
12mp (4096 x 3072)
15mp (4480 x 3360)
20mp (5120 x 3840)
40mp (7216 x 5412)
70mp (9600 x 7200)
-n [INTEGER] or --number [INTEGER]
Number of images to download. Default is 36. Maximum is 1,000.
Runtime display in bland, uncoloured text. Default will brighten your day. :)
-o [PATH] or
The output directory. If unspecified, the search phrase is used. Enclose whitespace in quotes.
-P [INTEGER] or --parallel [INTEGER]
How many parallel image downloads? Default is 64. Maximum is 512. Use 0 for maximum.
Suppress stdout. stderr is still shown.
Download a single random image. Use -n or --number to set the size of the image pool from which the random image is picked.
-R [PRESET] or
Only get images published this far back in time. Default is 'any'. Preset values are:
Downloaded image files are reindexed and renamed into a contiguous block. Note: this breaks the 1:1 relationship between URLs and downloaded file names.
-r [INTEGER] or
Number of download retries for each image. Default is 3. Maximum is 100.
Disable Google's SafeSearch content-filtering. Default is enabled.
Put the URL results file into the image sub-directory afterward. If selected, the URL list will be found in 'download.links.list' in the image sub-directory. This file is always created in the temporary build directory.
A comma-separated list (without spaces) of sites or domains to source images from.
Some servers do not report a byte file-size, so this parameter will ensure these image files are not downloaded. Specifying this will speed up downloading but will generate more failures.
Specify the maximum dimensions of thumbnails used in the gallery image, as width-by-height in pixels. Default is 400x400. If also using condensed mode (-C, --condensed), this setting determines the size and shape of each thumbnail.
-t [INTEGER] or
Number of seconds before the downloader stops trying to get each image. Default is 30. Maximum is 600 (10 minutes).
-T [STRING] or --title [STRING]
Specify a custom title for the gallery. Default is to use the search-phrase. To create a gallery with no title, specify --title none. Enclose whitespace in single or double-quotes according to taste, e.g. --title 'This is what cows look like!'
Image type to download. Preset values are:
-u [INTEGER] or --upper-size [INTEGER]
Only download image files smaller than this many bytes. Some servers do not report a byte file-size, so these will be downloaded anyway and checked afterward (unless --skip-no-size is specified). Default is 200,000 bytes.
Usage rights. Preset values are:
reuse (labeled for reuse)
reuse-with-mod (labeled for reuse with modification)
noncomm-reuse (labeled for noncommercial reuse)
noncomm-reuse-with-mod (labeled for noncommercial reuse with modification)
Lightning mode! For those who really can't wait! Lightning mode downloads images even faster by using an optimised set of parameters: timeouts are reduced to 1 second, downloads are not retried, images are skipped when the server won't report their size, up to 512 images are downloaded at the same time, and no gallery is created afterward.
$ ./googliser.sh -p "cows"
This will download the first 36 available images for the search-phrase "cows".
$ ./googliser.sh --number 250 --phrase "kittens" --parallel 128
This will download the first 250 available images for the search-phrase "kittens" and download up to 128 images at once.
$ ./googliser.sh --number 56 --phrase "fish" --upper-size 50000 --lower-size 2000 --debug
This will download the first 56 available images for the search-phrase "fish", but only if the image files are between 2 KB and 50 KB in size, and write a debug file.
$ ./googliser.sh -n80 -p "storm clouds" -sG --debug
This will download the first 80 available images for the phrase "storm clouds", place both the debug and URL list files in the target directory, and create a thumbnail gallery.
$ ./googliser.sh -p "flags" --exclude-words "pole,waving" --sites "wikipedia.com"
This will download available images for the phrase "flags", excluding images associated with the words 'pole' and 'waving', and returning only images from wikipedia.com.
0 : success!
1 : required external program unavailable.
2 : specified parameter incorrect - help shown.
3 : unable to create sub-directory for 'search-phrase'.
4 : could not get a list of search results from Google.
5 : image download ran out of images.
6 : thumbnail gallery build failed.
7 : unable to create a temporary build directory.
I wrote this script so users don't need to obtain an API key from Google to download multiple images.
To download 1,000 images, you need to be lucky enough for Google to find 1,000 results for your search term, and for those images to be available for download. I sometimes get more failed downloads than successful downloads (depending on what I'm searching for). In practice, I've never actually seen Google return 1,000 results. My best was about 986.
If identify (from ImageMagick) is installed, every downloaded file is checked to ensure that it is actually an image. Every file is renamed according to the image type determined by identify. If identify is not available, then no type-checking occurs.
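That type check can be sketched like this. It's an illustration, not the script's actual code; the format-to-extension mapping and file name are assumptions for the demo:

```shell
#!/usr/bin/env bash
# Ask ImageMagick's identify for the file's real format, then map it to
# the extension the file should carry.
type_ext()
{
    local fmt
    fmt=$(identify -format '%m' "$1" 2>/dev/null)
    case $fmt in
        JPEG) echo jpg ;;
        PNG)  echo png ;;
        GIF*) echo gif ;;      # multi-frame GIFs report one 'GIF' per frame
        *)    echo unknown ;;  # not an image (or identify is unavailable)
    esac
}

printf '<html>not an image</html>' > sample.download
echo "sample.download type: $(type_ext sample.download)"
```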
Every link that cannot be downloaded, or is outside the specified byte-size range, counts as a 'failure'. A good way to see lots of failures quickly is to specify a narrow byte-size range, e.g. --lower-size 12000 --upper-size 13000.
The failures percentage shown after download is the number of failed downloads as a percentage of the total number of image downloads attempted - this includes successful downloads. e.g. 25 images downloaded OK with 8 download failures yields a total of 33 download attempts, and 8 / 33 ≈ 24%.
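The worked example above can be reproduced with shell integer arithmetic:

```shell
# 25 successful downloads plus 8 failures = 33 attempts; failures as a
# percentage of attempts, truncated to an integer as the script displays it.
ok=25
failed=8
total=$((ok + failed))
echo "failures: $((failed * 100 / total))%"   # prints 'failures: 24%'
```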
Only the first image of a multi-image file (like an animated GIF) will be used for its gallery image.
Usually downloads run quite fast. This comes from having an over-abundance of image links to choose from. Sometimes though, if there are a limited number of image links remaining, downloads will appear to stall as all download processes are being held-up by servers that are not responding/slow to respond or are downloading large files. If you run low on image links, all remaining downloads can end up like this. This is perfectly normal behaviour and the problem will sort itself out. Grab a coffee.
The temporary build directory is /tmp/googliser.PID.UNIQ, where PID is shown in the title of the script when it runs, and UNIQ is any 3 random alpha-numeric characters.
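That naming pattern can be reproduced with mktemp (illustrative only; the script's own call may differ):

```shell
#!/usr/bin/env bash
# $$ is the current shell's PID; the trailing XXX is replaced by mktemp
# with 3 random alpha-numeric characters.
build_dir=$(mktemp -d "/tmp/googliser.$$.XXX")
echo "$build_dir"
rmdir "$build_dir"   # clean up the demo directory
```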
This script will need to be updated from time-to-time, as Google periodically changes their search-results page-code. The latest copy can be found here.
- Debian - 10.2 Buster 64b
- GNU BASH - v5.0.3
- GNU Wget - v1.20.1
- cURL - v7.64.0
- GNU grep - v3.3
- GNU sed - v4.7
- ImageMagick - v6.9.10-23 Q16
- Geany - v1.33
- ReText - v7.0.4
- Konsole - v18.04.0
- KDE Development Platform - v5.54.0
- QT - v5.11.3
- Find Icons - script icon
and periodically tested on these platforms:
- openSUSE - LEAP 42.1 64b
- Ubuntu - 18.04.1 LTS
- macOS - 10.13 High Sierra, 10.14 Mojave, 10.15 Catalina
- Fedora - 28, 30 Workstation
- Mint - 19.1 Tessa XFCE
- Manjaro - 18.0.2 XFCE
Suggestions / comments / bug reports / advice (are|is) most welcome. :) email me