Causal Analysis of Mathematical Reasoning in Neural Language Models

This repository contains the code for the paper "A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models".

Alessandro Stolfo*, Zhijing Jin*, Kumar Shridhar, Bernhard Schölkopf, Mrinmaya Sachan

Requirements

Experiments

The intervention experiments can be run by setting the desired parameters in run_numeracy_exp.sh and then executing sh run_numeracy_exp.sh.

Intervention Types

The parameters representing the different intervention/effect combinations are the following:

Models

The models on which this repo was tested are:

For GPT-3, an organization key and a secret key are necessary to access the OpenAI API. If you want to experiment with it, paste the keys into the openai_keys.txt file: the organization key on the first line and the secret key on the second.
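The two-line layout described above can be read with a small helper. This is a sketch only: the function name is ours, and the repo's own code may parse the file differently.

```python
from pathlib import Path


def read_openai_keys(path: str = "openai_keys.txt") -> tuple[str, str]:
    """Return (organization, secret) from the key file.

    Hypothetical helper, assuming the layout described in this README:
    organization key on line 1, secret key on line 2.
    """
    lines = Path(path).read_text().splitlines()
    return lines[0].strip(), lines[1].strip()
```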

All the other models are accessed through Hugging Face Transformers.

GPT-Neo-2.7B requires a GPU with at least 24 GB of memory, and GPT-J-6B requires a GPU with at least 32 GB of memory.

Heatmaps

To obtain the data needed to plot the heatmaps reported in the paper, execute

python heatmap_experiments/heatmap_experiment.py [model] [device] [out_dir] not_random arabic [seed] statement [data_path] [max_n] disabled + [n_templates]

This will store the average probability assigned to each possible ground-truth result in the range 0, ..., max_n as a .csv file.
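The resulting file can then be post-processed, e.g. to find the most probable candidate result per row. This sketch assumes one column per candidate named "0" through "max_n"; the actual .csv layout produced by the script may differ.

```python
import csv


def most_probable_results(csv_path: str, max_n: int) -> list[int]:
    """For each row of averaged probabilities over candidates 0..max_n,
    return the candidate with the highest probability.

    Assumes columns named '0'..str(max_n); the script's actual output
    format may differ.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [
        max(range(max_n + 1), key=lambda n: float(row[str(n)]))
        for row in rows
    ]
```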

Other Papers / Resources

Part of this repo is based on the code accompanying Finlayson et al. (2020).