This is the official repository of the paper "Toward Asymptotic Optimality: Sequential Unsupervised Regression of Density Ratio for Early Classification." TensorFlow implementations of the two proposed models, B2Bsqrt-TANDEM and TANDEMformer, are in the repo, along with the detailed experimental setups used to generate the results.
Conventional sequential density ratio estimation (SDRE) algorithms can fail to estimate density ratios (DRs) precisely because of an internal overnormalization problem, which prevents the DR-based sequential algorithm, the Sequential Probability Ratio Test (SPRT), from reaching its asymptotic Bayes optimality. We formulate this DR estimation problem as the log-likelihood ratio (LLR) saturation problem and solve it with two simple yet highly effective algorithms, B2Bsqrt-TANDEM and TANDEMformer. Both avoid the source of the problem, overnormalization, enabling precise unsupervised regression of the LLRs and providing an essential step toward asymptotic optimality.
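At the heart of B2Bsqrt-TANDEM is the B2Bsqrt activation, which replaces a saturating activation such as tanh: it is unbounded (so the LLR estimate cannot saturate) yet keeps a finite gradient at the origin. A minimal pure-Python sketch, assuming the paper's definition B2Bsqrt(x) = sign(x)(√(α + |x|) − √α); the standalone function name and the default α = 1.0 here are illustrative, not the repo's API:

```python
import math

def b2bsqrt(x: float, alpha: float = 1.0) -> float:
    """Back-to-back square-root activation:
    sign(x) * (sqrt(alpha + |x|) - sqrt(alpha)).
    Unbounded like sqrt (no saturation), but with a finite
    slope at x = 0 thanks to the alpha offset."""
    return math.copysign(math.sqrt(alpha + abs(x)) - math.sqrt(alpha), x)

# Odd-symmetric and zero-centered:
print(b2bsqrt(0.0))   # 0.0
print(b2bsqrt(3.0))   # sqrt(1 + 3) - sqrt(1) = 1.0
print(b2bsqrt(-3.0))  # -1.0
```

In the model itself the same function would be used in place of the LSTM's internal tanh activations.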
- Python 3.8
- Tensorflow 2.8.0
- CUDA 11.6.1
- cuDNN 8.3.3.40
This repo contains the code of the two temporal integrators (TIs) for SDRE:
- B2Bsqrt_TANDEM.py
- TANDEMformer.py
See the conceptual figure below and the original paper for a detailed description of the models. The code is based on TensorFlow, and either model can serve as a drop-in replacement for a conventional SDRE model. Both models take a tensor of shape (batch size, effective duration, feature dimension).
All the outputs are used to compute the multiplet cross-entropy loss where applicable, while the last two feature vectors of each sliding window are used to compute the LLR estimation (LLRe) loss.
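Under an N-th order Markov assumption (the "Markov order" row in the table below), the temporal integrator consumes length-(N+1) sliding windows of the input sequence. A short sketch of that windowing; the function name and list-of-lists representation are illustrative, not the repo's implementation:

```python
def sliding_windows(sequence, markov_order):
    """Split a sequence x_1, ..., x_T into the overlapping
    length-(N+1) windows x_{t-N}, ..., x_t that an N-th order
    Markov model consumes (one window per valid timestep t)."""
    n = markov_order
    return [sequence[t - n : t + 1] for t in range(n, len(sequence))]

# A toy sequence of 5 frames with Markov order N = 2
# yields 3 overlapping windows of length 3:
print(sliding_windows([0, 1, 2, 3, 4], markov_order=2))
# → [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```

A sequence of effective duration T therefore produces T − N windows, each of which yields one classifier output.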
The parameters in the table below are fixed across all models to ensure a fair comparison between our proposed models and the baselines. All other hyperparameters are optimized independently with the Optuna framework.
Parameter | Sequential Gaussian | SiW | UCF101 | HMDB51 |
---|---|---|---|---|
LSTM dim. | 64 | 256 | 256 | 256 |
Markov order | 49 | 10 | 10 | 10 |
Feature dim. | 128 | 512 | 2048 | 2048 |
Batch size | 100 | 83 | 31 | 25 |
The table below summarizes the hyperparameter search space used in the early classification experiments on real datasets (SiW, UCF101, and HMDB51). Note that the number of Transformer blocks, head size, number of attention heads, Feedforward dim., and MLP units are Transformer-specific parameters.
Parameter | Search space |
---|---|
Weight decay | {0.0, 0.00001, 0.0001, 0.001} |
Learning rate | {0.0001, 0.001, 0.01} |
Dropout | {0.0, 0.1, 0.2, 0.3, 0.4} |
Optimizer | {Adam, RMSprop} |
LLRe loss ratio | {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} |
Number of Transformer blocks | {1, 2} |
Head size | {8, 16, 32} |
Number of attention heads | {1, 2, 3} |
Feedforward dim. | {8, 16, 32} |
MLP units | {8, 16, 32} |
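The repo tunes these hyperparameters with Optuna; as a self-contained illustration of the search space above, here is a plain random-search draw over the same grid (using the standard library's `random.choice` rather than Optuna's samplers; the dictionary key names are ours):

```python
import random

# Search space transcribed from the table above.
SEARCH_SPACE = {
    "weight_decay": [0.0, 0.00001, 0.0001, 0.001],
    "learning_rate": [0.0001, 0.001, 0.01],
    "dropout": [0.0, 0.1, 0.2, 0.3, 0.4],
    "optimizer": ["Adam", "RMSprop"],
    "llre_loss_ratio": [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    # Transformer-specific parameters:
    "num_transformer_blocks": [1, 2],
    "head_size": [8, 16, 32],
    "num_attention_heads": [1, 2, 3],
    "feedforward_dim": [8, 16, 32],
    "mlp_units": [8, 16, 32],
}

def sample_trial(rng: random.Random) -> dict:
    """Draw one hyperparameter configuration uniformly from the grid."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

trial = sample_trial(random.Random(0))
print(trial)  # one candidate configuration from the grid
```

With Optuna, each entry would instead become a `trial.suggest_categorical(name, values)` call inside the objective function.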