Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective

PyTorch code for the following paper:

Kai Li, Martin Renqiang Min, and Yun Fu. "Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective", ICCV, 2019. [pdf]


Zero-shot learning (ZSL) aims to recognize instances of unseen classes solely based on the semantic descriptions of the classes. Existing algorithms usually formulate it as a semantic-visual correspondence problem, by learning mappings from one feature space to the other. Despite being reasonable, previous approaches essentially discard the highly precious discriminative power of visual features in an implicit way, and thus produce undesirable results. We instead reformulate ZSL as a conditioned visual classification problem, i.e., classifying visual features based on the classifiers learned from the semantic descriptions. With this reformulation, we develop algorithms targeting various ZSL settings: For the conventional setting, we propose to train a deep neural network that directly generates visual feature classifiers from the semantic attributes with an episode-based training scheme; For the generalized setting, we concatenate the learned highly discriminative classifiers for seen classes and the generated classifiers for unseen classes to classify visual features of all classes; For the transductive setting, we exploit unlabeled data to effectively calibrate the classifier generator using a novel learning-without-forgetting self-training mechanism and guide the process by a robust generalized cross-entropy loss. Extensive experiments show that our proposed algorithms significantly outperform state-of-the-art methods by large margins on most benchmark datasets in all the ZSL settings.
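The core idea above — generating visual-feature classifiers directly from semantic class descriptions — can be sketched as a small PyTorch module. This is a minimal illustration, not the authors' implementation; the attribute and feature dimensions (85 and 2048, typical for AwA-style attributes and ResNet features) and the hidden size are assumptions.

```python
import torch
import torch.nn as nn

class ClassifierGenerator(nn.Module):
    """Hypothetical sketch: map per-class attribute vectors to the weights
    of a linear visual classifier, then classify visual features with it."""

    def __init__(self, attr_dim=85, feat_dim=2048, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, attributes, features):
        # attributes: (num_classes, attr_dim) semantic descriptions
        # features:   (batch, feat_dim) visual features to classify
        weights = self.net(attributes)        # (num_classes, feat_dim)
        return features @ weights.t()         # logits: (batch, num_classes)
```

Under this view, training minimizes an ordinary cross-entropy loss over the generated logits, and the generalized setting simply concatenates generated unseen-class classifiers with learned seen-class ones along the class dimension.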


We recommend the following dependencies.

  • Python 3.5
  • PyTorch (0.4.1)


Download data from here and unzip the data in the project home directory.


Train models

Inductive setting

python --dataset AwA1 --ways 16 --shots 4 --lr 1e-5 --opt_decay 1e-4 --step_size 500 --log_file eps_lr5_opt4_ss500_w16_s4 --model_file
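The `--ways`/`--shots` flags above correspond to the episode-based training scheme: each episode samples a few classes and a few visual features per class, mimicking a small ZSL task. A minimal sampler might look like the following sketch (the data layout, a dict of per-class feature tensors, is an assumption):

```python
import random
import torch

def sample_episode(features_by_class, ways=16, shots=4):
    """Hypothetical episode sampler: draw `ways` classes and `shots`
    features per class, relabeled 0..ways-1 for this episode."""
    classes = random.sample(list(features_by_class), ways)
    feats, labels = [], []
    for idx, c in enumerate(classes):
        pool = features_by_class[c]                      # (n_c, feat_dim)
        picks = random.sample(range(len(pool)), shots)
        feats.append(pool[picks])
        labels.extend([idx] * shots)
    return torch.cat(feats), torch.tensor(labels), classes
```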

Transductive setting

python --dataset AwA1 --ways 16 --shots 1 --lr 1e-4 --opt_decay 1e-5 --step_size 200 --loss_q 5e-1 --trans_model_name trans_s1w16_lr4_opt5_ss200_q5e1 --log_file trans_s1w16_lr4_opt5_ss200_q5e1
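The `--loss_q` flag controls the robust generalized cross-entropy loss used to guide transductive self-training. The standard GCE formulation (Zhang & Sabuncu, 2018) is shown below as a hedged sketch; it interpolates between cross-entropy (q → 0) and MAE (q = 1), which makes it tolerant to noisy pseudo-labels on the unlabeled data:

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.5):
    """Generalized cross-entropy: (1 - p_y^q) / q, averaged over the batch.
    Recovers cross-entropy as q -> 0 and MAE at q = 1."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp(min=1e-7) ** q) / q).mean()
```

With `--loss_q 5e-1` this corresponds to q = 0.5, a middle ground between the two extremes.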

Evaluate trained models

Inductive setting

python --dataset AwA1 --model_file

Transductive setting

python --dataset AwA1 --trans_model_name


If you find this code useful, please cite the following paper:

@inproceedings{li2019rethinking,
  title={Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective},
  author={Li, Kai and Min, Martin Renqiang and Fu, Yun},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}


Apache License 2.0

