Demo source code for the paper "Equivariant Multi-View Networks".

Equivariant Multi-View Networks

animations/combined.gif

Abstract

Several popular approaches to 3D vision tasks process multiple views of the input independently with deep neural networks pre-trained on natural images, achieving view permutation invariance through a single round of pooling over all views. We argue that this operation discards important information and leads to subpar global descriptors. In this paper, we propose a group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, thus enabling joint reasoning over all views in an equivariant (instead of invariant) fashion, up to the very last layer. We further develop this idea to operate on smaller discrete homogeneous spaces of the rotation group, where a polar view representation is used to maintain equivariance with only a fraction of the number of input views. We set the new state of the art in several large-scale 3D shape retrieval tasks, and show additional applications to panoramic scene classification.
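The core idea above, aggregating views with a group convolution rather than a single pooling step, can be illustrated in one dimension. As a minimal sketch only (not the repository's implementation, which operates on discrete subgroups of the rotation group), the NumPy snippet below runs a group convolution over the cyclic group C_N of azimuthal view rotations and checks the equivariance property: cyclically permuting the input views permutes the output the same way. Every name in it is hypothetical.

import numpy as np

def cyclic_group_conv(view_features, filters):
    # Group convolution over the cyclic group C_N (hypothetical helper,
    # not from this repository).
    # view_features: (N, C), one C-dim descriptor per view.
    # filters:       (N, C), one filter tap per group element.
    # Computes (f * psi)(g) = sum_h f(g + h) psi(h), indices mod N,
    # returning an (N, C) signal defined on the group itself.
    N = view_features.shape[0]
    out = np.zeros_like(view_features)
    for g in range(N):          # output group element
        for h in range(N):      # sum over group elements
            out[g] += view_features[(g + h) % N] * filters[h]
    return out

# Equivariance check: rotating the input views by `shift` positions
# rotates the output signal on the group by the same amount.
rng = np.random.default_rng(0)
f = rng.standard_normal((12, 8))   # 12 views, 8-dim descriptor each
w = rng.standard_normal((12, 8))
shift = 3
lhs = cyclic_group_conv(np.roll(f, -shift, axis=0), w)
rhs = np.roll(cyclic_group_conv(f, w), -shift, axis=0)
assert np.allclose(lhs, rhs)

Because the output is again a signal on the group, such layers can be stacked; a rotation-invariant descriptor (e.g. for retrieval) is obtained only at the very last layer, for instance by pooling over g.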

Demo

Coming soon!

Reference

Carlos Esteves*, Yinshuang Xu*, Christine Allen-Blanchette, Kostas Daniilidis. Equivariant Multi-View Networks. http://arxiv.org/abs/1904.00993

@article{esteves_xu_19_equiv_multi_view_networ,
  author = {Esteves, Carlos and Xu, Yinshuang and Allen-Blanchette, Christine and Daniilidis, Kostas},
  title = {Equivariant Multi-View Networks},
  journal = {CoRR},
  year = {2019},
  url = {http://arxiv.org/abs/1904.00993},
  archivePrefix = {arXiv},
  eprint = {1904.00993},
  primaryClass = {cs.CV},
}

Authors

Carlos Esteves*, Yinshuang Xu*, Christine Allen-Blanchette, Kostas Daniilidis (* equal contribution)

GRASP Laboratory, University of Pennsylvania
