CTIR (Learning Class-Transductive Intent Representations for Zero-shot Intent Detection)

This is the implementation of our IJCAI-2021 paper "Learning Class-Transductive Intent Representations for Zero-shot Intent Detection".

The appendix referenced in the paper is provided in Appendix.pdf.


This repository contains code modified from here for CapsNet+CTIR, here for ZSDNN+CTIR, and here for +LOF+CTIR; many thanks to the original authors!

Download the GloVe embedding file

cd data/nlu_data

You can download the GloVe embedding file we used from here.
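If you want to sanity-check the download, a minimal Python sketch like the one below (not part of this repository; the filename glove.6B.300d.txt is only an assumption, point it at whichever file you placed in data/nlu_data) loads the file into a word-to-vector dictionary:

# Minimal sketch: load a GloVe text file into a dict of numpy vectors.
# The default path is an assumption -- use the file you actually downloaded.
import numpy as np

def load_glove(path="data/nlu_data/glove.6B.300d.txt"):
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

if __name__ == "__main__":
    glove = load_glove()
    print(len(glove), "word vectors loaded")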

Train & Test (CapsNet+CTIR)

cd capsnet-CTIR


Train & Test for the SNIP dataset in the ZSID setting

python main.py SNIP ZSID

Train & Test for the SNIP dataset in the GZSID setting

python main.py SNIP GZSID

Train & Test for the CLINC dataset in the ZSID setting

python main.py CLINC ZSID

Train & Test for the CLINC dataset in the GZSID setting

python main.py CLINC GZSID
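To run all four CapsNet+CTIR configurations in one pass, a small driver script along these lines (hypothetical, not included in the repository) simply loops over the commands above:

# Hypothetical convenience script: invoke main.py for every dataset/setting pair.
import subprocess

for dataset in ("SNIP", "CLINC"):
    for setting in ("ZSID", "GZSID"):
        subprocess.run(["python", "main.py", dataset, setting], check=True)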

Train & Test (ZSDNN+CTIR)

cd zerodnn-CTIR


Train & Test for the SNIP dataset in the ZSID setting

python zerodnn_main.py SNIP ZSID

Train & Test for the SNIP dataset in the GZSID setting

python zerodnn_main.py SNIP GZSID

Train & Test for the CLINC dataset in the ZSID setting

python zerodnn_main.py CLINC ZSID

Train & Test for the CLINC dataset in the GZSID setting

python zerodnn_main.py CLINC GZSID

Train & Test (+LOF+CTIR in the GZSID setting)

The main idea of the two-stage method for GZSID is to first determine whether an utterance belongs to the unseen intents (i.e., falls outside Y_seen), and then classify it into a specific intent class. This method bypasses the need to classify an input sentence among all the seen and unseen intents, thereby alleviating the domain shift problem. To verify the performance of integrating CTIR into the two-stage method, we design a new two-stage pipeline (+LOF+CTIR). In Phase 1, a test utterance is classified into one of the classes from Y_seen ∪ {y_unseen} using the density-based algorithm LOF(LMCL) (see here). In Phase 2, we perform ZSID on the utterances that have been classified into y_unseen, using the CTIR models such as CapsNet+CTIR and ZSDNN+CTIR. A schematic sketch of this pipeline is shown below.
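The following sketch assumes you already have feature vectors for the training and test utterances; seen_classifier and ctir_zsid are hypothetical placeholders for the repository's actual seen-intent classifier and CTIR models, and scikit-learn's LocalOutlierFactor stands in for the LOF(LMCL) step:

# Schematic sketch of the +LOF+CTIR two-stage pipeline. `seen_classifier` and
# `ctir_zsid` are hypothetical placeholders for the repository's seen-intent
# classifier and a CTIR model (CapsNet+CTIR or ZSDNN+CTIR).
from sklearn.neighbors import LocalOutlierFactor

def two_stage_predict(train_feats, test_feats, seen_classifier, ctir_zsid):
    # Phase 1: decide for each test utterance whether it is an inlier of the
    # seen-intent feature distribution (Y_seen) or an outlier (y_unseen).
    lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
    lof.fit(train_feats)
    is_seen = lof.predict(test_feats) == 1   # 1 = inlier, -1 = outlier

    preds = []
    for feat, seen in zip(test_feats, is_seen):
        if seen:
            preds.append(seen_classifier(feat))  # classify among Y_seen
        else:
            preds.append(ctir_zsid(feat))        # Phase 2: zero-shot intent detection
    return preds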

Reference

If you find this code useful, please cite the following paper:

@article{si2020learning,
  title={Learning Disentangled Intent Representations for Zero-shot Intent Detection},
  author={Si, Qingyi and Liu, Yuanxin and Fu, Peng and Li, Jiangnan and Lin, Zheng and Wang, Weiping},
  journal={arXiv preprint arXiv:2012.01721},
  year={2020}
}
