computer vision experiments for non-text-based search and exploration
experiments with computer vision and machine learning algorithms, aimed at reducing dependency on text-based search of the DPLA corpus.
that query was chosen for the visual quality of the items (ideally posters, assuming the proper metadata exists in the records) and for its relevance to 2018, the 100-year anniversary of the end of world war i.
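the exact query isn't repeated here, but requests against the DPLA Items API can be built along these lines. this is a sketch: the helper name and default parameters are illustrative, not part of this repository.

```python
import urllib.parse

DPLA_ITEMS_ENDPOINT = "https://api.dp.la/v2/items"

def build_dpla_query(api_key, query, page_size=100, page=1):
    """Build a DPLA Items API request URL for a text query.

    A real run would fetch this URL page by page and collect the
    `docs` array from each JSON response.
    """
    params = {
        "q": query,
        "api_key": api_key,
        "page_size": page_size,
        "page": page,
    }
    return DPLA_ITEMS_ENDPOINT + "?" + urllib.parse.urlencode(params)
```

paging through results just means incrementing `page` until the response's `docs` list comes back empty.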
download the thumbnails
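a minimal sketch of the download step, assuming each DPLA item record carries its thumbnail URL in the `object` field and a stable `id`. the `fetch` callable is injected (e.g. a thin wrapper around an HTTP library) so the logic stays testable; records without a thumbnail are skipped.

```python
import os

def download_thumbnails(docs, dest_dir, fetch):
    """Save each record's thumbnail to dest_dir, named by item id.

    docs: list of DPLA item records (dicts with `id` and `object`).
    fetch: callable taking a URL and returning the image bytes.
    Returns the list of file paths written.
    """
    os.makedirs(dest_dir, exist_ok=True)
    saved = []
    for doc in docs:
        url = doc.get("object")
        if not url:
            # some records have no thumbnail; skip them
            continue
        path = os.path.join(dest_dir, doc["id"] + ".jpg")
        with open(path, "wb") as f:
            f.write(fetch(url))
        saved.append(path)
    return saved
```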
browse the posters in a grid viewer organized by visual similarity, in a layout inspired by google images (although this one uses css grid). works best on larger screens.
uses localForage for list management.
this viewer aims to allow serendipitous exploration of the posters. it chooses a random image from the set, followed by other images ordered from very similar to very dissimilar. it is somewhat inspired by the ios photos application, where the swipe-up gesture on an image displays related photos.
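the similar-to-dissimilar ordering above can be sketched like this, assuming each image already has a feature vector (e.g. a CNN embedding). euclidean distance stands in for whatever metric the trained model actually uses, and the function name is illustrative.

```python
import math
import random

def similarity_walk(features, seed=None):
    """Order images from most to least similar to a starting image.

    features: dict mapping image id -> feature vector (list of floats).
    seed: optional starting id; a random one is chosen if omitted.
    """
    start = seed if seed is not None else random.choice(list(features))
    ref = features[start]

    def dist(vec):
        # plain euclidean distance to the starting image's vector
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, vec)))

    rest = sorted((k for k in features if k != start),
                  key=lambda k: dist(features[k]))
    return [start] + rest
```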
a skeleton service containing the trained model, which can be used to measure the similarity of any user-submitted image to the images in the set. more information is provided in that folder's README.
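the core of such a service is a nearest-neighbor lookup: embed the submitted image, then rank the set's precomputed feature vectors by distance. this sketch uses cosine distance as an assumption, not necessarily the metric the trained model in the service uses.

```python
import math

def nearest_items(query_vec, index, k=5):
    """Return the k item ids whose feature vectors are closest to
    query_vec (the embedding of a user-submitted image).

    index: dict mapping item id -> precomputed feature vector.
    """
    def cosine_dist(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return 1.0 - dot / (na * nb)

    ranked = sorted(index, key=lambda item: cosine_dist(query_vec, index[item]))
    return ranked[:k]
```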
this code borrows extensively from: