
<p align="center"> <a href="https://swebench.com"> <img src="assets/figures/swellama_banner.svg" style="height: 10em" alt="Kawi the SWE-Llama" /> </a> </p> <div align="center">

| 日本語 | English | 中文简体 | 中文繁體 |

</div>
<p align="center"> Code and data for our ICLR 2024 paper <a href="http://swe-bench.github.io/paper.pdf">SWE-bench: Can Language Models Resolve Real-World GitHub Issues?</a> <br/> <br/> <a href="https://www.python.org/"> <img alt="Build" src="https://img.shields.io/badge/Python-3.8+-1f425f.svg?color=purple"> </a> <a href="https://copyright.princeton.edu/policy"> <img alt="License" src="https://img.shields.io/badge/License-MIT-blue"> </a> <a href="https://badge.fury.io/py/swebench"> <img src="https://badge.fury.io/py/swebench.svg"> </a> </p>

Please refer to our website for the public leaderboard, and to the change log for information on the latest updates to the SWE-bench benchmark.

📰 News

👋 Overview

SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem.

<img src="assets/figures/teaser.png">

To access SWE-bench, copy and run the following code:

from datasets import load_dataset
swebench = load_dataset('princeton-nlp/SWE-bench', split='test')
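
Each instance pairs a repository snapshot with a GitHub issue and the reference patch that resolved it. As a quick sketch (field names may vary slightly across dataset versions), you can inspect one instance like this:

example = swebench[0]          # first test instance
print(example["instance_id"])  # an identifier such as "sympy__sympy-20590"
print(sorted(example.keys()))  # list the available fields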

🚀 Set Up

SWE-bench uses Docker for reproducible evaluations. Follow the instructions in the Docker setup guide to install Docker on your machine. If you're setting up on Linux, we recommend following the post-installation steps as well.

Finally, to build SWE-bench from source, follow these steps:

git clone git@github.com:princeton-nlp/SWE-bench.git
cd SWE-bench
pip install -e .

Test your installation by running:

python -m swebench.harness.run_evaluation \
    --predictions_path gold \
    --max_workers 1 \
    --instance_ids sympy__sympy-20590 \
    --run_id validate-gold
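
If the run completes, the harness writes a per-run report JSON summarizing how many instances were resolved. As a rough sketch (the exact report filename and fields may differ across harness versions), you can locate and print it with:

import glob, json

# Look for the run report in the current directory; with the command above it
# is typically named something like gold.validate-gold.json (an assumption).
for path in glob.glob("*validate-gold*.json"):
    with open(path) as f:
        report = json.load(f)
    # Print the numeric summary fields (e.g. resolved / total instance counts).
    print(path, {k: v for k, v in report.items() if isinstance(v, (int, float))})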

💽 Usage

[!WARNING] Running fast evaluations on SWE-bench can be resource intensive. We recommend running the evaluation harness on an x86_64 machine with at least 120GB of free storage, 16GB of RAM, and 8 CPU cores. You may need to experiment with the --max_workers argument to find the optimal number of workers for your machine, but we recommend using fewer than min(0.75 * os.cpu_count(), 24); a quick helper for computing this cap is sketched after this note.

If running with Docker Desktop, make sure to increase your virtual disk space to have ~120 GB free, and set max_workers consistently with the recommendation above for the CPUs available to Docker.

Support for arm64 machines is experimental.
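
As a convenience, here is a minimal helper that mirrors the guideline above; it only computes the suggested cap and is an illustration, not part of the harness:

import os

# Suggested --max_workers value: fewer than min(0.75 * CPU count, 24).
def recommended_max_workers(cap: int = 24) -> int:
    cpus = os.cpu_count() or 1
    return max(1, min(int(0.75 * cpus), cap))

print(recommended_max_workers())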

Evaluate model predictions on SWE-bench Lite using the evaluation harness with the following command:

python -m swebench.harness.run_evaluation \
    --dataset_name princeton-nlp/SWE-bench_Lite \
    --predictions_path <path_to_predictions> \
    --max_workers <num_workers> \
    --run_id <run_id>
    # use --predictions_path 'gold' to verify the gold patches
    # use --run_id to name the evaluation run
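
The predictions file is expected to be JSON or JSONL, where each prediction records the instance it targets, a model name, and the generated patch as a unified diff. The sketch below writes a one-entry JSONL file; the model name and diff text are placeholders, not real outputs:

import json

predictions = [
    {
        "instance_id": "sympy__sympy-20590",            # an instance from the dataset
        "model_name_or_path": "my-model",                # hypothetical model name
        "model_patch": "diff --git a/f.py b/f.py\n...",  # placeholder unified diff
    }
]

with open("predictions.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")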

This command will generate Docker build logs (logs/build_images) and evaluation logs (logs/run_evaluation) in the current directory.

The final evaluation results will be stored in the evaluation_results directory.

To see the full list of arguments for the evaluation harness, run:

python -m swebench.harness.run_evaluation --help

Additionally, the SWE-bench repo can help you train your own models, run inference on existing models, and run the data collection procedure on your own repositories to create new task instances; see the tutorials below.

⬇️ Downloads

| Datasets | Models |
| --- | --- |
| 🤗 SWE-bench | 🦙 SWE-Llama 13b |
| 🤗 "Oracle" Retrieval | 🦙 SWE-Llama 13b (PEFT) |
| 🤗 BM25 Retrieval 13K | 🦙 SWE-Llama 7b |
| 🤗 BM25 Retrieval 27K | 🦙 SWE-Llama 7b (PEFT) |
| 🤗 BM25 Retrieval 40K | |
| 🤗 BM25 Retrieval 50K (Llama tokens) | |

🍎 Tutorials

We've also written the following blog posts on how to use different parts of SWE-bench. If you'd like to see a post about a particular topic, please let us know via an issue.

💫 Contributions

We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues! To contribute, please file a new pull request or issue and fill in the corresponding template. We'll be sure to follow up shortly!

Contact person: Carlos E. Jimenez and John Yang (Email: carlosej@princeton.edu, johnby@stanford.edu).

✍️ Citation

If you find our work helpful, please use the following citation.

@inproceedings{
    jimenez2024swebench,
    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}

🪪 License

MIT. Check LICENSE.md.