LogiTorch is a PyTorch-based library for logical reasoning on natural language. It consists of:
- Textual logical reasoning datasets
- Implementations of different logical reasoning neural architectures
- A simple and clean API that can be used with PyTorch Lightning
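LogiTorch can be installed with pip: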
foo@bar:~$ pip install logitorch==0.0.1a2
or directly from the GitHub repository:
foo@bar:~$ pip install git+https://github.com/LogiTorch/logitorch.git
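Once installed, you can check that the package imports correctly:
foo@bar:~$ python -c "import logitorch"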
You can find the documentation for LogiTorch on ReadTheDocs.
Datasets implemented in LogiTorch (a loading example follows the list):
- AR-LSAT (MIT License)
- ConTRoL (GitHub License)
- LogiQA (GitHub License)
- ReClor (Non-Commercial Research Use)
- RuleTaker (Apache-2.0 License)
- ProofWriter (Apache-2.0 License)
- SNLI (CC BY-SA 4.0 License)
- MultiNLI (CC BY-SA 4.0 License)
- RTE (TAC User Agreements)
- Negated SNLI (MIT License)
- Negated MultiNLI (MIT License)
- Negated RTE (MIT License)
- PARARULES Plus (MIT License)
- AbductionRules (MIT License)
- FOLIO (CC BY-SA 4.0 License)
- FLD (CC BY-SA 4.0 License)
- LogiQA2.0 (CC BY-SA 4.0 License)
- LogiQA2.0 NLI
- HELP
- SimpleLogic
- RobustLR
- LogicNLI
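All of these datasets are exposed as PyTorch Dataset classes under logitorch.datasets. As a minimal sketch, a dataset can be loaded on its own (the constructor is taken from the training example below; the len() call assumes the standard torch Dataset protocol):

from logitorch.datasets.qa.ruletaker_dataset import RuleTakerDataset

# Load the training split of the depth-5 RuleTaker dataset
train_dataset = RuleTakerDataset("depth-5", "train")
# Number of examples, via the standard Dataset protocol (assumed implemented)
print(len(train_dataset))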
Models implemented in LogiTorch:
- RuleTaker
- ProofWriter
- BERTNOT
- PRover
- FLDProver
- TINA
- FaiRR
- LReasoner
- DAGN
- Focal Reasoner
- AdaLoGN
- Logiformer
- LogiGAN
- MERIt
- APOLLO
- LAMBADA
- Chainformer
- IDOL
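The following example trains the RuleTaker model on the depth-5 slice of the RuleTaker dataset with PyTorch Lightning: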
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from torch.utils.data.dataloader import DataLoader

from logitorch.data_collators.ruletaker_collator import RuleTakerCollator
from logitorch.datasets.qa.ruletaker_dataset import RuleTakerDataset
from logitorch.pl_models.ruletaker import PLRuleTaker

# Load the depth-5 training and validation splits
train_dataset = RuleTakerDataset("depth-5", "train")
val_dataset = RuleTakerDataset("depth-5", "val")

# The collator batches and tokenizes the (context, question) pairs
ruletaker_collate_fn = RuleTakerCollator()

train_dataloader = DataLoader(
    train_dataset, batch_size=32, collate_fn=ruletaker_collate_fn
)
val_dataloader = DataLoader(
    val_dataset, batch_size=32, collate_fn=ruletaker_collate_fn
)

model = PLRuleTaker(learning_rate=1e-5, weight_decay=0.1)

# Keep only the checkpoint with the lowest validation loss
checkpoint_callback = ModelCheckpoint(
    save_top_k=1,
    monitor="val_loss",
    mode="min",
    dirpath="models/",
    filename="best_ruletaker",
)

# On PyTorch Lightning >= 2.0, replace gpus=1 with devices=1
trainer = pl.Trainer(callbacks=[checkpoint_callback], accelerator="gpu", gpus=1)
trainer.fit(model, train_dataloader, val_dataloader)
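The best checkpoint is saved as models/best_ruletaker.ckpt.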
We also provide pre-configured pipelines that bundle these training steps for some datasets:
from logitorch.pipelines.qa_pipelines import ruletaker_pipeline
from logitorch.pl_models.ruletaker import PLRuleTaker

model = PLRuleTaker(learning_rate=1e-5, weight_decay=0.1)

# The pipeline handles data loading, collation, and training in one call
ruletaker_pipeline(
    model=model,
    dataset_name="depth-5",
    saved_model_name="best_ruletaker",
    saved_model_path="models/",
    batch_size=32,
    epochs=10,
    accelerator="gpu",
    gpus=1,
)
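Once trained, the saved checkpoint can be loaded to make predictions: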
from logitorch.pl_models.ruletaker import PLRuleTaker
from logitorch.datasets.qa.ruletaker_dataset import RULETAKER_ID_TO_LABEL

# Load the checkpoint saved by the training run above
model = PLRuleTaker.load_from_checkpoint("models/best_ruletaker.ckpt")

context = "Bob is smart. If someone is smart then he is kind."
question = "Bob is kind."

# predict returns a label id; RULETAKER_ID_TO_LABEL maps it back to a string
pred = model.predict(context, question)
print(RULETAKER_ID_TO_LABEL[pred])
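For this context and question, a trained model should print True, since the rule applied to the first fact entails the question.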
Users of LogiTorch should distinguish the datasets and models of our library from the originals. They should always credit and cite both our library and the original data source, as in ``We used LogiTorch's \cite{helwe2022logitorch} re-implementation of BERTNOT \cite{hosseini2021understanding}''.
If you want to cite LogiTorch, please refer to our publication at the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022):
@inproceedings{helwe2022logitorch,
title={LogiTorch: A PyTorch-based library for logical reasoning on natural language},
author={Helwe, Chadi and Clavel, Chlo\'e and Suchanek, Fabian},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year={2022}
}
This work was partially funded by ANR-20-CHIA-0012-01 ("NoRDF").