Caver

Raising a torch in the cave to see the words on the wall: Caver is a toolkit for multilabel text classification that lets you tag your short text in 3 lines. It uses Facebook's PyTorch project to make the implementation easier.


Demo | Requirements | Install | Pre-trained models | Train | Examples | Document

Quick Demo

from caver import CaverModel
model = CaverModel("./checkpoint_path")

# The pre-trained models are character-level, so each sentence is space-separated characters.
# English glosses:
#   sentence[0]: "Is watching American TV shows a reliable way to learn English?"
#   sentence[1]: "Kobe Bryant joins Yao Ming as a global ambassador for the 2019 Basketball World Cup"
#   sentence[2]: "How to survive to the very end in Game of Thrones"
#   sentence[3]: "Can RNG beat team TOP in the League of Legends LPL summer split?"
sentence = ["看 美 剧 学 英 语 靠 谱 吗",
            "科 比 携 手 姚 明 出 任 2019 篮 球 世 界 杯 全 球 大 使",
            "如 何 在 《 权 力 的 游 戏 》 中 苟 到 最 后",
            "英 雄 联 盟 LPL 夏 季 赛 RNG 能 否 击 败 TOP 战 队"]

model.predict([sentence[0]], top_k=3)
>>> ['美剧', '英语', '英语学习']

model.predict([sentence[1]], top_k=5)
>>> ['篮球', 'NBA', '体育', 'NBA 球员', '运动']

model.predict([sentence[2]], top_k=7)
>>> ['权力的游戏(美剧)', '美剧', '影视评论', '电视剧', '电影', '文学', '小说']

model.predict([sentence[3]], top_k=6)
>>> ['英雄联盟(LoL)', '电子竞技', '英雄联盟职业联赛(LPL)', '游戏', '网络游戏', '多人联机在线竞技游戏 (MOBA)']

Requirements

  • PyTorch
  • tqdm
  • torchtext
  • numpy
  • Python3

Install

$ pip install caver --user
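
Once installed, a quick sanity check is to import the main class used in the Quick Demo above (no model files are needed just to import):

# Should import and print the class without error if the install succeeded.
from caver import CaverModel
print(CaverModel)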

Do you have any pre-trained models?

Yes, we have released two pre-trained models (a character-level CNN and a character-level LSTM) trained on the Zhihu NLPCC2018 open dataset.

If you want to use a pre-trained model for text tagging, you can download it (along with the other files needed for inference) from the Caver releases page. Alternatively, run the following commands to download and extract the files into your current directory:

$ wget -O - https://github.com/guokr/Caver/releases/download/0.1/checkpoints_char_cnn.tar.gz | tar zxvf -
$ wget -O - https://github.com/guokr/Caver/releases/download/0.1/checkpoints_char_lstm.tar.gz | tar zxvf -
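
A minimal sketch of loading one of the downloaded checkpoints, assuming the archive extracts to a directory named checkpoints_char_cnn in the current directory (adjust the path if yours differs):

from caver import CaverModel

# Path assumption: the tarball above extracted to ./checkpoints_char_cnn
model = CaverModel("./checkpoints_char_cnn")

# The pre-trained models are character-level, so pass space-separated characters,
# exactly as in the Quick Demo.
model.predict(["看 美 剧 学 英 语 靠 谱 吗"], top_k=3)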

How to train on your own dataset

$ python3 train.py --input_data_dir {path to your original dataset} \
                   --output_data_dir {path to store the preprocessed dataset} \
                   --train_filename train.tsv \
                   --valid_filename valid.tsv \
                   --checkpoint_dir {path to save the checkpoints} \
                   --model {fastText/CNN/LSTM} \
                   --batch_size {16, adjust for your own hardware} \
                   --epoch {10}
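
After training finishes, the saved checkpoints can be loaded the same way as in the Quick Demo. The path below is hypothetical and should match whatever you passed to --checkpoint_dir:

from caver import CaverModel

# Hypothetical path: replace with the --checkpoint_dir used above.
model = CaverModel("./my_checkpoint_dir")
model.predict(["your space-separated tokens here"], top_k=5)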

More Examples

The examples are still being updated, but you can check the examples directory for basic usage.
