computer vision tests for facilitating non-text-based search and exploration

experiments with some computer vision and machine learning algorithms, aiming to reduce dependency on text-based searches of the DPLA corpus.

tests were done on the “world war i posters” search in DPLA, which returns about 2,300 images from different sources. a list of ids and urls can be found in data/images.ndjson
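for reference, each line of an ndjson file is a standalone JSON object. a minimal parser sketch — the id/url field names and sample values below are assumptions for illustration, not taken from the actual file:

```python
import json

def parse_ndjson(text):
    """parse newline-delimited JSON: one object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# hypothetical records mirroring an id/url shape (field names are guesses)
sample = ('{"id": "a1", "url": "https://example.org/a1.jpg"}\n'
          '{"id": "b2", "url": "https://example.org/b2.jpg"}\n')
records = parse_ndjson(sample)
print(records[0]["id"])  # → a1
```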

that query was chosen for the visual quality of the items (ideally posters, assuming the proper metadata exists in the records) and for its relevance to 2018, the 100-year anniversary of the end of world war i.

download the thumbnails

use the script in data/ to download all the thumbnails. the script requires jq.
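the repository's own script is shell-based and uses jq; as an illustrative alternative, here is a python sketch of the same download step. the thumbs/ output directory, the .jpg extension, and the id/url field names are all assumptions:

```python
import os
import urllib.request

def thumb_path(record, out_dir="thumbs"):
    # derive a local filename from a record's id (".jpg" extension is an assumption)
    return os.path.join(out_dir, record["id"] + ".jpg")

def download_all(records, out_dir="thumbs"):
    """fetch every thumbnail, skipping files already on disk."""
    os.makedirs(out_dir, exist_ok=True)
    for rec in records:
        dest = thumb_path(rec, out_dir)
        if not os.path.exists(dest):
            urllib.request.urlretrieve(rec["url"], dest)
```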

grid viewer demo

browse the posters in a grid viewer organized by visual similarity, in a layout inspired by google images (although this one makes use of css grid). works best on larger screens.

uses localForage for list management.
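one simple way to get a similarity-based ordering like the grid viewer's is a greedy nearest-neighbour walk over per-image feature vectors. a toy sketch of that idea — not the viewer's actual algorithm, whose front end is javascript with localForage and css grid:

```python
import math

def order_by_similarity(vectors):
    """greedy nearest-neighbour ordering: start at index 0, then
    repeatedly append the closest remaining vector (euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    order = [0]
    remaining = list(range(1, len(vectors)))
    while remaining:
        last = vectors[order[-1]]
        nxt = min(remaining, key=lambda i: dist(vectors[i], last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# toy 1-d "features": similar values end up adjacent in the grid order
print(order_by_similarity([[0.0], [5.0], [0.1], [4.9]]))  # → [0, 2, 3, 1]
```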

serendipity viewer demo

this viewer aims to allow serendipitous exploration of the posters. it chooses a random image from the set, followed by other images ranging from very similar to very dissimilar. it is somewhat inspired by the ios photos application, where swiping up on an image displays related photos.
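the similar-to-dissimilar sequence can be sketched as: pick a random seed image, then sort the rest by distance to it. a toy version over plain feature vectors — the viewer's real features and distance metric are not specified here, so euclidean distance is an assumption:

```python
import math
import random

def serendipity_sequence(vectors, rng=None):
    """pick a random seed image, then order the rest from most similar
    to most dissimilar (euclidean distance to the seed)."""
    rng = rng or random.Random()
    seed = rng.randrange(len(vectors))
    def dist(i):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vectors[i], vectors[seed])))
    rest = sorted((i for i in range(len(vectors)) if i != seed), key=dist)
    return [seed] + rest
```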

reverse image search web service

a skeleton service that bundles the trained model and can be used to score the similarity of any user-submitted image against the images in the set. more information is provided in that folder's README.
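a client for such a service might look like the sketch below; the endpoint url, payload format, and {id: similarity} response shape are all assumptions — see that folder's README for the real interface:

```python
import json
import urllib.request

SERVICE_URL = "http://localhost:5000/search"  # hypothetical endpoint

def query_service(image_bytes, url=SERVICE_URL):
    """POST raw image bytes to the service and decode its JSON reply."""
    req = urllib.request.Request(
        url, data=image_bytes,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def top_matches(results, n=5):
    """sort an {id: similarity} mapping, highest similarity first."""
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)[:n]
```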

see also

this code borrows extensively from:

