
Interactive Visual Feature Search

Devon Ulrich and Ruth Fong

This repo contains the code for our 2023 paper "Interactive Visual Feature Search".

Many visualization techniques have been created to help explain the behavior of convolutional neural networks (CNNs), but they largely consist of static diagrams that convey limited information. Interactive visualizations can provide richer insights and allow users to explore a model's behavior more easily; however, they are typically not easily reusable and are specific to a particular model.

We introduce Interactive Visual Feature Search, a novel interactive visualization that is generalizable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar CNN features, which can provide new insights into how a model processes the geometric and semantic details in images.

Example

*(Animation: highlighting a region of the SPIA building with the widget, left; the top 2 ImageNet nearest neighbors for that region, right.)*

Notebooks

Please see our Colab notebooks for demos of the interactive tool.

Installing

Our tool is available as a pip package. The following command installs it in your current environment:

pip install visualfeaturesearch

Please see the sections below and the demo notebooks for details on how to use the library.

Implementation Overview

Interactive Visual Feature Search performs similarity search between free-form regions of images. Our method for implementing this can be broken down into a few steps:

  1. Choose a model for computing feature data and a dataset to search through (e.g. ResNet50 and the ImageNet validation set).
  2. Select a convolutional layer from the model to extract features from (e.g. ResNet50's conv5 block, with an output tensor of shape 7x7x2048).
  3. Compute the feature tensors for all images in the search dataset.
  4. Choose any query image and highlight a region of interest to search for (see above figure, left).
  5. Compute the feature tensor for the query image, downsample the highlighted mask to be the same size as the feature data (e.g. 7x7), and multiply the feature tensor by the mask element-wise to obtain the query features.
  6. Compute the k-nearest neighbors between the query features and all regions of the same size from the dataset features via cosine similarity.
  7. Display the most similar images & corresponding regions within them (see above figure, right).
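The core of steps 5–6 (masking the query features, then sliding the masked window over the dataset features with cosine similarity) can be sketched in plain NumPy. All names, shapes, and the block-average downsampling below are illustrative assumptions, not the library's actual API:

```python
import numpy as np

def downsample_mask(mask, out_h, out_w):
    """Block-average a binary mask (H, W) down to (out_h, out_w).
    Assumes H and W are multiples of the target size."""
    H, W = mask.shape
    return mask.reshape(out_h, H // out_h, out_w, W // out_w).mean(axis=(1, 3))

def masked_query(features, mask_small):
    """Element-wise multiply (C, h, w) features by the downsampled mask."""
    return features * mask_small[None, :, :]

def top_k_neighbors(query, dataset_feats, k=2):
    """Cosine similarity between a (C, h, w) query window and every
    same-size window in each (C, H, W) dataset feature map.
    Returns indices of the k most similar images."""
    C, h, w = query.shape
    q = query.ravel()
    q_norm = np.linalg.norm(q) + 1e-8
    best = []
    for feats in dataset_feats:            # one feature map per image
        _, H, W = feats.shape
        best_sim = -1.0
        for i in range(H - h + 1):         # slide the window over the map
            for j in range(W - w + 1):
                win = feats[:, i:i + h, j:j + w].ravel()
                sim = q @ win / (q_norm * (np.linalg.norm(win) + 1e-8))
                best_sim = max(best_sim, sim)
        best.append(best_sim)
    return np.argsort(best)[::-1][:k]
```

In practice the library batches this computation over cached feature data; the sketch only illustrates the region-based matching idea.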

This library is designed to make it simple to use Interactive Visual Feature Search on a local laptop/desktop or on a cloud-based notebook environment such as Google Colab.

Library Details

To edit the above notebooks or create your own visualizations with Interactive Visual Feature Search, the following modules are necessary:

  • widgets.py: the HighlightWidget and MultiHighlightWidget classes create interactive widgets that can be used in Jupyter/Colab notebooks to select an image and highlight a region within it with the mouse.
  • searchtool.py: the CachedSearchTool computes the cosine similarities between the query image and the searchable dataset in a region-based manner as described above.
  • caching.py: the above notebook demos use precomputed feature data to speed up the runtime of Interactive Visual Feature Search. To create your own feature caches for custom experiments, precompute() can be used to save a Zarr archive that can be used by CachedSearchTool.
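The precompute-then-search pattern behind caching.py can be imitated with a plain NumPy archive. The function names and file format below are illustrative stand-ins (the library itself saves a Zarr archive via precompute()):

```python
import numpy as np

def precompute_cache(feature_maps, path="features.npz"):
    """Stack per-image feature tensors (C, H, W) and save them once,
    so repeated searches only pay for the similarity computation.
    (Illustrative stand-in for the library's Zarr-based cache.)"""
    np.savez_compressed(path, feats=np.stack(feature_maps))
    return path

def load_cache(path):
    """Load the cached feature stack back as an (N, C, H, W) array."""
    return np.load(path)["feats"]
```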

Reference

If you find this visualization useful, please cite it as follows:

@inproceedings{ulrich2023interactive,
  title={Interactive Visual Feature Search},
  author={Devon Ulrich and Ruth Fong},
  booktitle={XAI in Action: Past, Present, and Future Applications},
  year={2023},
  url={https://openreview.net/forum?id=JqfN8vp1ov}
}

Acknowledgements

This visualization arose from discussions with David Bau and his initial prototype of a similar visualization.
