data-to-paper: Backward-traceable AI-driven Research

<picture> <img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/data_to_paper_icon.gif" width="350" align="right"> </picture>


data-to-paper is an automation framework that systematically navigates interacting AI agents through complete, end-to-end scientific research, starting from raw data alone and concluding with transparent, backward-traceable, human-verifiable scientific papers (<a href="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/ExampleManuscriptFigures.pdf" target="_blank">Example AI-created paper</a>, Copilot App DEMO). This repository is the code implementation of the paper "Autonomous LLM-Driven Research — from Data to Human-Verifiable Research Papers".

<picture> <img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/AI-Human-agents.png" width="300" align="left"> </picture>

Try it out

`pip install data-to-paper`

then run:

`data-to-paper`

See INSTALL for dependencies. <br clear="left"/>
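For a clean setup, here is a minimal sketch of installing into a fresh virtual environment and launching the app. The environment name and the `OPENAI_API_KEY` variable are illustrative assumptions; consult INSTALL for the exact dependencies and API-key configuration that data-to-paper expects.

```shell
# Create and activate an isolated environment (optional, but keeps dependencies contained)
python -m venv data-to-paper-env
source data-to-paper-env/bin/activate

# Install from PyPI
pip install data-to-paper

# Assumption: an LLM API key is supplied via an environment variable;
# OPENAI_API_KEY is shown as a conventional example. See INSTALL for the
# variables the framework actually reads.
export OPENAI_API_KEY="sk-..."

# Launch the interactive data-to-paper app
data-to-paper
```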

<picture> <img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/page_flipping.gif" width="400" align="right"> </picture>

Key features

<picture> <img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/research-steps-horizontal.png" width=100% align="left"> </picture> <br><br>

https://github.com/Technion-Kishony-lab/data-to-paper/assets/31969897/0f3acf7a-a775-43bd-a79c-6877f780f2d4

Motivation: Building a new standard for Transparent, Traceable, and Verifiable AI-driven Research

The data-to-paper framework was created as a research project to understand the capacities and limitations of LLM-driven scientific research, and to develop ways of harnessing LLMs to accelerate research while maintaining, and even enhancing, key scientific values such as transparency, traceability, and verifiability, and while allowing scientists to oversee and direct the process (see also: living guidelines).

Implementation

Towards this goal, data-to-paper systematically guides interacting LLM and rule-based agents along the conventional scientific path: from annotated data, through creating research hypotheses, conducting a literature search, writing and debugging data-analysis code, and interpreting the results, to ultimately writing a complete research paper step by step.

Reference

The data-to-paper framework is described in the following NEJM AI paper:

and in the following pre-print:

Examples

We ran data-to-paper on the following test cases:

* Try out: `data-to-paper diabetes`
* Try out: `data-to-paper social_network`
* Try out: `data-to-paper npr_nicu`

We defined three levels of difficulty for the research question on this dataset:

  1. easy: Compare two ML methods for predicting optimal intubation depth.
     Try out: `data-to-paper ML_easy`
  2. medium: Compare one ML method and one formula-based method for predicting optimal intubation depth.
     Try out: `data-to-paper ML_medium`
  3. hard: Compare four ML methods with three formula-based methods for predicting optimal intubation depth.
     Try out: `data-to-paper ML_hard`

Contributing

We invite people to try out data-to-paper with their own data and are eager for feedback and suggestions. The framework is currently designed for relatively simple research goals and datasets, where the aim is to raise and test a statistical hypothesis.

We also invite people to help develop and extend the data-to-paper framework in science or other fields.

Important notes

Disclaimer. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise, especially, but not limited to, the consequences of running LLM-created code on your local machine. The developers of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software.

Accountability. You are solely responsible for the entire content of created manuscripts, including their rigour, quality, ethics, and any other aspect. The process should be overseen and directed by a human-in-the-loop, and created manuscripts should be carefully vetted by a domain expert. The process is NOT error-proof, and human intervention is necessary to ensure the accuracy and quality of the results.

Compliance. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software. Further, data-to-paper manuscripts are watermarked as AI-created for transparency; users should not remove this watermark.

Token Usage. Please note that using most language models through external APIs, especially GPT-4, can be expensive due to token usage. By using this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your API usage regularly and to set up any necessary limits or alerts to prevent unexpected charges.

Related projects

Here are some other cool multi-agent projects:

And also this curated list of awesome-agents.