Source code for "Multi-Purposing Domain Adaptation Discriminators for Pseudo Labeling Confidence"

Multi-Purposing Domain Adaptation Discriminators for Pseudo Labeling Confidence

Method: instead of using the task classifier's softmax confidence for weighting samples for pseudo labeling, use the discriminator's / domain classifier's confidence based on how source-like the feature representations of the samples appear. In other words, we multi-purpose the discriminator to not only aid in producing domain-invariant representations (like in DANN) but also to provide pseudo labeling confidence.
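As a rough sketch of the weighting idea (illustrative only; the variable names below are ours, not identifiers from this codebase): the discriminator's probability that a target sample's features look source-like replaces the task classifier's softmax confidence as the pseudo-label weight.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical per-sample quantities for a batch of unlabeled target data:
#   task_logits  -- task classifier outputs, shape (batch, num_classes)
#   domain_probs -- discriminator's P(source | features), shape (batch,)
rng = np.random.default_rng(0)
task_logits = rng.normal(size=(4, 10))
domain_probs = rng.uniform(size=4)

# Pseudo labels come from the task classifier either way.
pseudo_labels = task_logits.argmax(axis=1)

# Baseline weighting: the task classifier's softmax confidence.
softmax_conf = softmax(task_logits).max(axis=1)

# Proposed weighting: how source-like the features appear to the
# discriminator, i.e., multi-purposing the DANN domain classifier.
domain_conf = domain_probs

# The pseudo-labeling loss would then weight each sample's
# cross-entropy term by domain_conf instead of softmax_conf.
```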


  • Download and preprocess the datasets (datasets/…)
  • Optionally view the datasets (datasets/…)
  • Train models (… or kamiak_train.srun)
  • Evaluate models (… or kamiak_eval.srun)


For example, to train on USPS to MNIST with no adaptation:

./ test1 --model=vada_small --source=usps --target=mnist --method=none

To weight pseudo labels with the domain classifier's confidence (proposed method) or with the task classifier's softmax confidence:

./ test1 --model=vada_small --source=usps --target=mnist --method=pseudo
./ test1 --model=vada_small --source=usps --target=mnist --method=pseudo --nouse_domain_confidence --debugnum=1

To instead do instance weighting:

./ test1 --model=vada_small --source=usps --target=mnist --method=instance
./ test1 --model=vada_small --source=usps --target=mnist --method=instance --nouse_domain_confidence --debugnum=1
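For contrast, here is a minimal sketch of the instance-weighting variant (again illustrative, with made-up names): rather than selecting pseudo labels by confidence, each target sample's loss term is scaled directly by its per-sample confidence weight.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, weights):
    """Mean of per-sample cross-entropy, each term scaled by its weight."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return (weights * nll).mean()

rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 10))      # task classifier outputs on target data
labels = logits.argmax(axis=1)         # pseudo labels
weights = rng.uniform(size=4)          # e.g. discriminator confidence per sample

loss = weighted_cross_entropy(logits, labels, weights)
```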

Or, to run any of these without adversarial training:

./ test2 --model=vada_small --source=usps --target=mnist --method=pseudo --nodomain_invariant
./ test2 --model=vada_small --source=usps --target=mnist --method=pseudo --nouse_domain_confidence --debugnum=1 --nodomain_invariant

Note: you probably need --nocompile_metrics for any SynSigns to GTSRB adaptation; otherwise it may run out of memory. Also, these examples assume you're using SLURM. If not, you can modify the scripts to run with bash rather than queueing with sbatch.


For example, to evaluate the above "test1" trained models:

sbatch kamiak_eval.srun test1 --eval_batch=2048 --jobs=1