# LogiTorch

LogiTorch is a PyTorch-based library for logical reasoning on natural language. It consists of:
- Textual logical reasoning datasets
- Implementations of different logical reasoning neural architectures
- A simple and clean API that can be used with PyTorch Lightning
## 📦 Installation

```console
foo@bar:~$ pip install logitorch==0.0.1a2
```
or install the latest version from GitHub:

```console
foo@bar:~$ pip install git+https://github.com/LogiTorch/logitorch.git
```
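To quickly check that the installation succeeded, importing the package is a simple sanity test:

```console
foo@bar:~$ python -c "import logitorch"
```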
## 📖 Documentation
You can find the documentation for LogiTorch on ReadTheDocs.
## 🖥️ Features

### 📋 Datasets
Datasets implemented in LogiTorch:
- AR-LSAT <sub>(MIT LICENSE)</sub>
- ConTRoL <sub>(GitHub LICENSE)</sub>
- LogiQA <sub>(GitHub LICENSE)</sub>
- ReClor <sub>(Non-Commercial Research Use)</sub>
- RuleTaker <sub>(APACHE-2.0 LICENSE)</sub>
- ProofWriter <sub>(APACHE-2.0 LICENSE)</sub>
- SNLI <sub>(CC-BY-SA-4.0 LICENSE)</sub>
- MultiNLI <sub>(CC-BY-SA-4.0 LICENSE)</sub>
- RTE <sub>(TAC User Agreements)</sub>
- Negated SNLI <sub>(MIT LICENSE)</sub>
- Negated MultiNLI <sub>(MIT LICENSE)</sub>
- Negated RTE <sub>(MIT LICENSE)</sub>
- PARARULES Plus <sub>(MIT LICENSE)</sub>
- AbductionRules <sub>(MIT LICENSE)</sub>
- FOLIO <sub>(CC-BY-SA-4.0 LICENSE)</sub>
- FLD <sub>(CC-BY-SA-4.0 LICENSE)</sub>
- LogiQA2.0 <sub>(CC-BY-SA-4.0 LICENSE)</sub>
- LogiQA2.0 NLI
- HELP
- SimpleLogic
- RobustLR
- LogicNLI
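
As the training example below suggests, the datasets are exposed as PyTorch-style `Dataset` objects. Here is a minimal sketch of loading one; note that the assumption that an item unpacks into a context/statement/label triple is ours, not a documented guarantee:

```python
from logitorch.datasets.qa.ruletaker_dataset import RuleTakerDataset

# Instantiating a dataset loads the given split (downloading it if needed)
train_dataset = RuleTakerDataset("depth-5", "train")
print(len(train_dataset))  # number of training examples

# Assumption: each item unpacks to a (context, statement, label) triple
context, statement, label = train_dataset[0]
print(context, statement, label)
```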
### 🤖 Models
Models implemented in LogiTorch:
- RuleTaker
- ProofWriter
- BERTNOT
- PRover
- FLDProver
- TINA
- FaiRR
- LReasoner
- DAGN
- Focal Reasoner
- AdaLoGN
- Logiformer
- LogiGAN
- MERit
- APOLLO
- LAMBADA
- Chainformer
- IDOL
## 🧪 Example Usage

### Training Example
```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from torch.utils.data.dataloader import DataLoader

from logitorch.data_collators.ruletaker_collator import RuleTakerCollator
from logitorch.datasets.qa.ruletaker_dataset import RuleTakerDataset
from logitorch.pl_models.ruletaker import PLRuleTaker

# Load the RuleTaker depth-5 training and validation splits
train_dataset = RuleTakerDataset("depth-5", "train")
val_dataset = RuleTakerDataset("depth-5", "val")

ruletaker_collate_fn = RuleTakerCollator()
train_dataloader = DataLoader(
    train_dataset, batch_size=32, collate_fn=ruletaker_collate_fn
)
val_dataloader = DataLoader(
    val_dataset, batch_size=32, collate_fn=ruletaker_collate_fn
)

model = PLRuleTaker(learning_rate=1e-5, weight_decay=0.1)

# Keep only the checkpoint with the lowest validation loss
checkpoint_callback = ModelCheckpoint(
    save_top_k=1,
    monitor="val_loss",
    mode="min",
    dirpath="models/",
    filename="best_ruletaker",
)

trainer = pl.Trainer(callbacks=[checkpoint_callback], accelerator="gpu", gpus=1)
trainer.fit(model, train_dataloader, val_dataloader)
```
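Note: the `gpus` argument follows the PyTorch Lightning 1.x API. On Lightning ≥ 2.0 it was removed; the equivalent call there is `pl.Trainer(callbacks=[checkpoint_callback], accelerator="gpu", devices=1)`.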
### Pipeline Example

We provide pre-configured pipelines that train a model on some of the datasets with a single call.
```python
from logitorch.pipelines.qa_pipelines import ruletaker_pipeline
from logitorch.pl_models.ruletaker import PLRuleTaker

model = PLRuleTaker(learning_rate=1e-5, weight_decay=0.1)

# Trains the model on RuleTaker depth-5 and saves the best checkpoint
ruletaker_pipeline(
    model=model,
    dataset_name="depth-5",
    saved_model_name="best_ruletaker",
    saved_model_path="models/",
    batch_size=32,
    epochs=10,
    accelerator="gpu",
    gpus=1,
)
```
### Testing Example

```python
from logitorch.pl_models.ruletaker import PLRuleTaker
from logitorch.datasets.qa.ruletaker_dataset import RULETAKER_ID_TO_LABEL

# Load the checkpoint saved during training
model = PLRuleTaker.load_from_checkpoint("models/best_ruletaker.ckpt")

context = "Bob is smart. If someone is smart then he is kind."
question = "Bob is kind."

pred = model.predict(context, question)
print(RULETAKER_ID_TO_LABEL[pred])
```
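For this context, a well-trained model should predict the label corresponding to *true*, since "Bob is kind." follows from "Bob is smart." and the rule "If someone is smart then he is kind."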
## Citing

Users of LogiTorch should distinguish the datasets and models of our library from the originals. They should always credit and cite both our library and the original data source, as in: "We used LogiTorch's \cite{helwe2022logitorch} re-implementation of BERTNOT \cite{hosseini2021understanding}".

To cite LogiTorch, please refer to our publication at the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP):
```bibtex
@inproceedings{helwe2022logitorch,
  title={LogiTorch: A PyTorch-based library for logical reasoning on natural language},
  author={Helwe, Chadi and Clavel, Chlo\'e and Suchanek, Fabian},
  booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  year={2022}
}
```
## Acknowledgments

This work was partially funded by ANR-20-CHIA-0012-01 ("NoRDF").