Hedwig

This repo contains PyTorch deep learning models for document classification, implemented by the Data Systems Group at the University of Waterloo.

Models

Each model directory has a README.md with further details.
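
For example, a model is typically trained by running its package as a module from the repository root. The model name and flags below are only illustrative and vary by model, so consult the corresponding README for the exact invocation:

$ python -m models.kim_cnn --dataset Reuters --batch-size 32 --epochs 30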

Setting up PyTorch

Hedwig is designed for Python 3.6 and PyTorch 0.4. PyTorch recommends Anaconda for managing your environment, so we suggest creating a dedicated environment as follows:

$ conda create --name castor python=3.6
$ source activate castor

And installing PyTorch as follows:

$ conda install pytorch=0.4.1 cuda92 -c pytorch
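
To verify the installation (optional), you can print the PyTorch version and check whether CUDA is visible; this should report 0.4.1 and, on a correctly configured GPU machine, True:

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"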

Other Python packages we use can be installed via pip:

$ pip install -r requirements.txt

The code depends on data from NLTK (e.g., stopwords), so you'll need to download it first. Run the Python interpreter and type the following commands:

>>> import nltk
>>> nltk.download()
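
If you prefer a non-interactive setup (e.g., on a headless server), the stopwords corpus mentioned above can be fetched directly from the command line; any other NLTK data a model needs can be downloaded the same way:

$ python -c "import nltk; nltk.download('stopwords')"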

Datasets

Download the Reuters, AAPD, and IMDB datasets, along with the word2vec embeddings, from hedwig-data:

$ git clone https://github.com/castorini/hedwig.git
$ git clone https://git.uwaterloo.ca/jimmylin/hedwig-data.git

Organize your directory structure as follows:

.
├── hedwig
└── hedwig-data

After cloning the hedwig-data repo, you need to decompress the word2vec embeddings and convert them from binary to plain-text format:

$ cd hedwig-data/embeddings/word2vec
$ gzip -d GoogleNews-vectors-negative300.bin.gz
$ python bin2txt.py GoogleNews-vectors-negative300.bin GoogleNews-vectors-negative300.txt
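
As an optional sanity check, the converted embeddings should be a plain-text file with one vector per line; for the GoogleNews embeddings that is on the order of three million lines (the exact count depends on whether bin2txt.py writes a header line):

$ wc -l GoogleNews-vectors-negative300.txt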

If you are an internal Hedwig contributor using the machines in the lab, follow the instructions here.
