
NEWS: a PyTorch version of the Weighted Batch Norm layers is available here!

This is the official Caffe implementation of Boosting Domain Adaptation by Discovering Latent Domains.

This code is forked from BVLC/caffe. For any issue not directly related to our additional layers, please refer to the upstream repository.

Additional layers

In this Caffe version, two additional layers are provided:


MultiModalBatchNormLayer

Performs a weighted normalization with respect to one domain. Differently from the standard BatchNormLayer, it takes one additional input: a weight vector whose dimension equals the batch size. This vector represents the probability that each sample belongs to the domain represented by this MultiModalBatchNormLayer. As an example, the syntax is the following:

    layer {
      name: "wbn"
      type: "MultiModalBatchNorm"
      bottom: "input"
      bottom: "weights"
      top: "output"
    }
In case we have 2 latent domains, the full mDA layer would be:

    layer {
      name: "wbn1"
      type: "MultiModalBatchNorm"
      bottom: "input_1"
      bottom: "weights_1"
      top: "output_1"
    }

    layer {
      name: "wbn2"
      type: "MultiModalBatchNorm"
      bottom: "input_2"
      bottom: "weights_2"
      top: "output_2"
    }

    layer {
      name: "wbn"
      type: "Eltwise"
      bottom: "output_1"
      bottom: "output_2"
      top: "output"
      eltwise_param {
        operation: SUM
      }
    }

Since the output of a MultiModalBatchNormLayer is already scaled by each sample's domain probability, the final layer is a simple element-wise sum.
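The weighted normalization and the element-wise sum above can be sketched in NumPy. This is an illustration of the weighted statistics, not the layer's actual Caffe code; the batch and the weight values below are made up:

```python
import numpy as np

def wbn(x, w, eps=1e-5):
    # Weighted batch norm sketch: mean and variance are computed with
    # per-sample weights w, and the normalized output is scaled by w as well.
    w = w.reshape(-1, 1)                     # (N, 1), broadcasts over features
    mean = (w * x).sum(axis=0) / w.sum()     # weighted batch mean
    var = (w * (x - mean) ** 2).sum(axis=0) / w.sum()  # weighted batch variance
    return w * (x - mean) / np.sqrt(var + eps)

x = np.random.randn(4, 3)                    # a toy batch of 4 samples
w1 = np.array([0.9, 0.2, 0.6, 0.5])          # made-up domain-1 probabilities
w2 = 1.0 - w1                                # per-sample probabilities sum to 1
output = wbn(x, w1) + wbn(x, w2)             # what the Eltwise SUM layer computes
```

With uniform weights this reduces to standard batch normalization; because each sample is scaled by its own domain probability, the element-wise sum yields a per-sample convex combination of the domain-specific normalizations.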


EntropyLossLayer

A simple entropy loss implementation with integrated softmax computation. We used the implementation of AutoDIAL.
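As a reference for what the layer computes, here is a NumPy sketch of an entropy loss with integrated softmax. The log-sum-exp stabilization is a common implementation choice, not necessarily identical to the AutoDIAL code:

```python
import numpy as np

def entropy_loss(logits):
    # Entropy loss with integrated softmax: H(p) = -sum(p * log p),
    # averaged over the batch. Stabilized with the log-sum-exp trick.
    z = logits - logits.max(axis=1, keepdims=True)            # stabilize
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log-softmax
    p = np.exp(log_p)                                         # softmax
    return -(p * log_p).sum(axis=1).mean()                    # mean entropy
```

Uniform logits give the maximum value log(K) for K classes, while confidently peaked logits drive the loss toward zero, which is the effect the loss exploits on unlabeled target samples.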

Networks and solvers

Under models/latentDA we provide prototxts and solvers for the experiments reported in the paper. In particular, the folder contains:

  • resnet18_k2.prototxt : the ResNet architecture used for the PACS experiments, with 2 latent domains.
  • alexnet_k2.prototxt : the AlexNet architecture used for the Office31 experiments, with 2 latent domains.
  • alexnet_sourcek2_targetk2.prototxt : the AlexNet architecture used for the Office-Caltech experiments in the multi-target scenario, with 2 latent domains for both source and target.
  • alexnet_k3.prototxt : the AlexNet architecture used for the Office-Caltech experiments in the multi-target scenario, with 3 latent domains.
  • solver_pacs.prototxt : the solver used for the PACS experiments.
  • solver_alexnet.prototxt : the solver used for both the Office31 and Office-Caltech experiments.

Notice that each of these files has some fields delimited by %, which must be specified before use.
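One minimal way to fill those fields programmatically is a plain text substitution. The field name %LR% below is purely illustrative; check each prototxt for its actual %-delimited placeholders:

```python
def fill_prototxt(template_path, output_path, values):
    # Replace each %FIELD% placeholder in a prototxt template with the
    # value supplied in the `values` dict, writing the result to output_path.
    with open(template_path) as f:
        text = f.read()
    for field, value in values.items():
        text = text.replace('%' + field + '%', value)
    with open(output_path, 'w') as f:
        f.write(text)
```

For example, `fill_prototxt('solver_pacs.prototxt', 'solver.prototxt', {'LR': '0.001'})` would substitute a hypothetical %LR% field before training.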

Abstract and citation

Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.

    @inproceedings{mancini2018boosting,
      author    = {Mancini, Massimiliano and Porzi, Lorenzo and Rota Bul\`o, Samuel and Caputo, Barbara and Ricci, Elisa},
      title     = {Boosting Domain Adaptation by Discovering Latent Domains},
      booktitle = {Computer Vision and Pattern Recognition (CVPR)},
      year      = {2018},
      month     = {June}
    }