
<div align="center"> <div align="center"> <a href="https://www.sysu-hcp.net/"> <img src="Images/HCP.png" width="400"/> </a> <a href=""> <img src="Images/LOGO.png" width="400"/> </a> </div> </div>

CausalVLR is an open-source Python framework for causal relation discovery and causal inference. It implements state-of-the-art causal learning algorithms for various visual-linguistic reasoning tasks, such as VQA, image/video captioning, model generalization and robustness, and medical report generation.


📘 Documentation | 🛠️ Installation | 👀 Model Zoo | 🆕 Update News | 🚀 Ongoing Projects | 🤔 Reporting Issues


<a id="table-of-contents">πŸ“„ Table of Contents </a>

<a id="introduction">πŸ“š Introduction <a href="#table-of-contents">πŸ”</a> </a>

CausalVLR is an open-source Python framework based on PyTorch for causal relation discovery and causal inference. It implements state-of-the-art causal learning algorithms for various visual-linguistic reasoning tasks; see the Documentation for details.

<div align="center"><font size=5> Framework Overview </font></div>

[Framework overview figure]


❗ Note: The framework is under active development. Feedback (issues, suggestions, etc.) is highly encouraged.

<a id="whats-new">πŸš€ What's New <a href="#table-of-contents">πŸ”</a> </a>

🔥 2024.04.07.

🔥 2023.12.12.

🔥 2023.08.19.

🔥 2023.06.29.


✨ CaCo-CoT: Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs

<div align="center">

[CaCo-CoT framework figure]

</div>

<div align="center">

| Method | ScienceQA | Com2Sense | BoolQ |
| ------------- | --------- | --------- | ----- |
| GPT-3.5-turbo | 79.3 | 70.1 | 71.7 |
| CoT | 78.4 | 63.6 | 71.1 |
| SC-CoT | 84.0 | 66.0 | 71.4 |
| C-CoT | 82.5 | 68.8 | 70.5 |
| CaCo-CoT | 86.5 (+2.5) | 73.5 (+3.4) | 73.5 (+1.8) |

</div>
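The sketch below illustrates the consensus idea named in the title: several reasoner agents propose chain-of-thought answers, an evaluator screens each answer's explanation for causal consistency, and a majority vote decides. It is a toy illustration only, with all names hypothetical; the actual CaCo-CoT prompting and evaluation are defined in the paper and codebase.

```python
# Toy sketch of the multi-agent consensus idea behind CaCo-CoT, NOT the
# official implementation. All names below are hypothetical.
from collections import Counter

def caco_cot_consensus(question, reasoners, evaluator):
    """Majority vote over causally consistent candidate answers."""
    candidates = []
    for reasoner in reasoners:
        answer, explanation = reasoner(question)       # chain-of-thought style output
        if evaluator(question, answer, explanation):   # causal-consistency screen
            candidates.append(answer)
    if not candidates:                                 # nothing passed: fall back to all answers
        candidates = [reasoner(question)[0] for reasoner in reasoners]
    return Counter(candidates).most_common(1)[0][0]

# Stub demo: three fake reasoners and an evaluator that accepts everything.
reasoners = [
    lambda q: ("A", "because ..."),
    lambda q: ("B", "since ..."),
    lambda q: ("A", "as ..."),
]
print(caco_cot_consensus("toy question", reasoners, lambda q, a, e: True))  # -> "A"
```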

✨ VLCI: Visual Causal Intervention for Radiology Report Generation

<div align="center">

[VLCI framework figure]

</div>

<div align="center">

| Dataset | B@1 | B@2 | B@3 | B@4 | METEOR | ROUGE-L | CIDEr |
| --------- | ---- | ---- | ---- | ---- | ------ | ------- | ----- |
| IU-Xray | 50.5 | 33.4 | 24.5 | 18.9 | 20.4 | 39.7 | 45.6 |
| MIMIC-CXR | 40.0 | 24.5 | 16.5 | 11.9 | 15.0 | 28.0 | 19.0 |

</div>

✨ CMCIR: Cross-modal Causal Intervention for Event-level Video Question Answering

<div align="center">

[CMCIR framework figure]

</div>

<div align="center">

| Method | Basic | Attribution | Introspection | Counterfactual | Forecasting | Reverse | All |
| ------- | ----- | ----------- | ------------- | -------------- | ----------- | ------- | --- |
| VQAC | 34.02 | 49.43 | 34.44 | 39.74 | 38.55 | 49.73 | 36.00 |
| MASN | 33.83 | 50.86 | 34.23 | 41.06 | 41.57 | 50.80 | 36.03 |
| DualVGR | 33.91 | 50.57 | 33.40 | 41.39 | 41.57 | 50.62 | 36.07 |
| HCRN | 34.17 | 50.29 | 33.40 | 40.73 | 44.58 | 50.09 | 36.26 |
| CMCIR | 36.10 (+1.93) | 52.59 (+1.73) | 38.38 (+3.94) | 46.03 (+4.64) | 48.80 (+4.22) | 52.21 (+1.41) | 38.58 (+1.53) |

</div>
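As general background on the causal intervention that methods like VLCI and CMCIR build on (a textbook identity, not a description of any one model's estimator): intervening on the input $X$ means estimating $P(Y \mid do(X))$ rather than the observational $P(Y \mid X)$. When an observed confounder $Z$ satisfies the back-door criterion, the standard adjustment is

$$
P(Y \mid do(X)) = \sum_{z} P(Y \mid X, Z=z)\,P(Z=z),
$$

which differs from the observational conditional by weighting with the marginal $P(Z=z)$ instead of $P(Z=z \mid X)$. Whether a given model uses back-door or front-door adjustment is detailed in its paper.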

<a id="get-started">πŸ‘¨β€πŸ« Getting Started <a href="#table-of-contents">πŸ”</a> </a>

Please see Overview for a general introduction to CausalVLR.

For detailed user guides and advanced guides, please refer to our documentation. The code structure of the toolbox is shown below.

[Code structure of the toolbox]

Installation

Please refer to Installation in the documentation for installation instructions.

Briefly, to use CausalVLR, you can install it from source with pip:

git clone https://github.com/HCPLab-SYSU/CausalVLR.git
cd CausalVLR
pip install -e .

or install from PyPI:

pip install hcpcvlr
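To verify the installation, a minimal sanity check such as the following should suffice (the import name matches the PyPI package above; the `__version__` attribute is only a common convention and is an assumption here):

```python
# Minimal post-install sanity check. Falls back gracefully if the
# package does not expose a __version__ attribute.
import hcpcvlr

print(getattr(hcpcvlr, "__version__", "hcpcvlr imported successfully"))
```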

Running examples

For causal discovery, there are various running examples in the `test` directory.

For the implemented modules, we provide unit tests for the convenience of developing your own methods.
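For example, assuming the unit tests are pytest-compatible (an assumption; adjust to match the repository's test runner), the whole suite can be run from the repository root:

```python
# Run the unit-test suite programmatically with pytest, assuming the
# tests live in the "test" directory mentioned above.
# Equivalent to running "python -m pytest test -v" from the shell.
import sys

import pytest

sys.exit(pytest.main(["test", "-v"]))  # -v: verbose per-test output
```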

<h2 id="model-zoo">πŸ‘€ Model Zoo <a href="#table-of-contents">πŸ”</a> </h2>

Please feel free to let us know if you have any recommendations regarding high-quality datasets. We are grateful for any effort that benefits the development of the causality community.

<div align="center">
TaskModelBenchmark
Medical Report GenerationVLCIIU-Xray, MIMIC-CXR
VQACMCIRSUTD-TrafficQA, TGIF-QA, MSVD-QA, MSRVTT-QA
Visual Causal Scene DiscoveryVCSRNExT-QA, Causal-VidQA, and MSRVTT-QA
Model Generalization and RobustnessRobust Fine-tuningImageNet-V2, ImageNet-R, ImageNet-Sketch, ObjectNet, ImageNet-A
Causality-Aware Medical DiagnosisCAMDAMuZhi, DingXiang
Faithful Reasoning in LLMsCaCo-CoTScienceQA, Com2Sense, BoolQ
</div>

<a id="license"> 🎫 License <a href="#table-of-contents">πŸ”</a> </a>

This project is released under the <a href="https://github.com/HCPLab-SYSU/CausalVLR/LICENSE">Apache 2.0 license</a>.

<a id="citation"> πŸ–ŠοΈ Citation <a href="#table-of-contents">πŸ”</a> </a>

If you find this project useful in your research, please consider citing:

@misc{liu2023causalvlr,
      title={CausalVLR: A Toolbox and Benchmark for Visual-Linguistic Causal Reasoning}, 
      author={Yang Liu and Weixing Chen and Guanbin Li and Liang Lin},
      year={2023},
      eprint={2306.17462},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

<a id="contribution"> πŸ™Œ Contribution <a href="#table-of-contents">πŸ”</a> </a>

Please feel free to open an issue if you find anything unexpected. We are always working to make our community better!

<a id="acknowledgement"> 🀝 Acknowledgement <a href="#table-of-contents">πŸ”</a> </a>

CausalVLR is an open-source project and we appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new models.

πŸͺ The review paper here can provide some help

Causal Reasoning Meets Visual Representation Learning: A Prospective Study

Machine Intelligence Research (MIR), 2022
A review paper on causal reasoning and visual representation learning

[Overview figure from the review paper]

@article{liu2022causal,
  title={Causal Reasoning Meets Visual Representation Learning: A Prospective Study},
  author={Liu, Yang and Wei, Yu-Shen and Yan, Hong and Li, Guan-Bin and Lin, Liang},
  journal={Machine Intelligence Research},
  pages={1--27},
  year={2022},
  publisher={Springer}
}

<a id="hcp">πŸ—οΈ Projects in HCPLab<a href="#table-of-contents">πŸ”</a> </a>