Kaili Wang, Jose Oramas, Tinne Tuytelaars
Oral paper @ ACCV 2020.
You can find the paper here: In Defense of LSTMs for Addressing Multiple Instance Learning Problems
LSTMs have a proven track record in analyzing sequential data. But what about unordered instance bags, as found in a Multiple Instance Learning (MIL) setting? While rarely used for this purpose, we show that LSTMs excel in this setting too. In addition, we show that LSTMs are capable of indirectly capturing instance-level information using only bag-level annotations; thus, they can be used to learn instance-level models in a weakly supervised manner. Our empirical evaluation on both simplified (MNIST) and realistic (Lookbook and Histopathology) datasets shows that LSTMs are competitive with, or even surpass, state-of-the-art methods specially designed for specific MIL problems. Moreover, we show that their performance on instance-level prediction is close to that of fully supervised methods.
The full code is still being organized and will be uploaded.
The code is based on PyTorch 0.4.1 and Python 2. The implementation builds on Attention-based Deep Multiple Instance Learning.
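For reference, the attention pooling at the core of Attention-based Deep Multiple Instance Learning can be sketched as below. This is an illustrative sketch only, not the repository's actual code; class and parameter names (`AttentionMILPooling`, `in_dim`, `attn_dim`) are our own.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Illustrative sketch of attention-based MIL pooling:
    each instance embedding gets a learned weight, and the bag
    embedding is the weighted sum of instance embeddings."""
    def __init__(self, in_dim=128, attn_dim=64):
        super().__init__()
        # Two-layer attention scorer producing one scalar per instance.
        self.attn = nn.Sequential(
            nn.Linear(in_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, instances):
        # instances: (bag_size, in_dim) -- one unordered bag of embeddings.
        a = torch.softmax(self.attn(instances), dim=0)  # (bag_size, 1), sums to 1
        return (a * instances).sum(dim=0)               # (in_dim,) bag embedding

pool = AttentionMILPooling()
bag_embed = pool(torch.randn(6, 128))  # a bag of 6 instance embeddings
```

The pooled bag embedding would then be fed to a bag-level classifier head.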
We provide the training and testing code for the difficult Outlier Detection experiment. We use 10,000 sets to train the model and 2,000 sets for testing. The set cardinality is 6 with a standard deviation of 1.
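To make the paper's central idea concrete, here is a minimal sketch of an LSTM consuming an unordered bag of instances and emitting a bag-level prediction, using a fixed cardinality of 6 as in the experiment above. All names and dimensions (`LSTMBagClassifier`, `in_dim`, `hidden_dim`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LSTMBagClassifier(nn.Module):
    """Illustrative sketch: an LSTM treats a MIL bag as a sequence
    (in arbitrary order) and the final hidden state summarizes the
    bag for bag-level classification."""
    def __init__(self, in_dim=28 * 28, hidden_dim=128, n_classes=2):
        super().__init__()
        # Per-instance embedding before the recurrent pass.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, bags):
        # bags: (batch, set_cardinality, in_dim); instance order is
        # arbitrary, since MIL bags are unordered.
        h = self.encoder(bags)
        _, (h_n, _) = self.lstm(h)        # final hidden state per bag
        return self.head(h_n.squeeze(0))  # (batch, n_classes) bag logits

model = LSTMBagClassifier()
logits = model(torch.randn(4, 6, 28 * 28))  # 4 bags, cardinality 6
```

Variable-cardinality bags would additionally require padding and packing (e.g. `torch.nn.utils.rnn.pack_padded_sequence`) before the LSTM.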