# NeuroX Toolkit

<p align="center">
  <img src="https://github.com/fdalvi/NeuroX/raw/master/docs/intro/logo.png" />
</p>

NeuroX provides all the necessary tooling to perform interpretation and analysis of (deep) neural networks, centered around probing. Specifically, the toolkit provides:
- Support for extracting activations from popular models, including all models in the transformers library, with extended support for other frameworks like OpenNMT-py planned in the near future (see the extraction sketch after this list)
- Support for training linear probes on top of these activations, over the entire activation space of a model, over specific layers, or even over specific sets of neurons.
- Support for extracting neurons related to specific concepts using the Linguistic Correlation Analysis method (What is one Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models). The toolkit can extract either a local ranking of neurons important to a particular target class, or a global ranking of neurons important to all target classes.
- Support for ablation analysis by either removing or zeroing out specific neurons to determine their function and importance.
- Support for subword- and character-level aggregation across a variety of tokenizers, including BPE and all tokenizers in the transformers library.
- Support for activation visualization over regular text, to generate qualitative samples of neuron activity over particular sentences.
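As a quick sketch of the extraction workflow, the snippet below pulls per-token BERT activations for a small corpus using the `transformers_extractor` module from the API reference. The input file `samples.txt` (one sentence per line) is an assumed example, not part of the toolkit:

```python
# Minimal extraction sketch (assumes samples.txt exists, one sentence per line).
import neurox.data.extraction.transformers_extractor as transformers_extractor

transformers_extractor.extract_representations(
    "bert-base-uncased",      # any model from the transformers library
    "samples.txt",            # input corpus: one sentence per line (assumed file)
    "bert_activations.json",  # output file with per-token activations
    aggregation="average",    # subword aggregation: "average", "first" or "last"
)
```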
A demo showcasing much of the functionality provided by this toolkit is available.
## Getting Started
This toolkit requires and is tested on Python versions 3.6 and above. It may work with older Python versions with some fiddling, but is currently neither tested nor supported. The easiest way to get started is to use the published pip package:

```bash
pip install neurox
```
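To confirm the installation worked, a quick import check should suffice:

```python
# Quick sanity check that the package is importable after installation.
import neurox

print(neurox.__name__)
```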
## Manual Installation
If you wish to install this package manually (e.g. to modify or contribute to the code base), you can clone this repository into a directory of your choice:
```bash
git clone https://github.com/fdalvi/NeuroX.git
```
Create and activate a new virtual environment for the toolkit (this step can be skipped if you manage your environments another way, e.g. with Conda or system-level installations):
```bash
python -m venv .neurox-env
source .neurox-env/bin/activate
```
Install the dependencies required to run the toolkit:
```bash
pip install -e .
```
## Sample Code
A Jupyter notebook with a complete example of extracting activations from BERT, training a toy task, extracting neurons, and visualizing them is available in the examples directory as a quick introduction to the main functionality provided by this toolkit.
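For orientation, here is a heavily condensed sketch of that flow. Function names follow the API reference, but exact signatures may vary slightly across versions, and the `train.word`/`train.label` file names are illustrative assumptions:

```python
import neurox.data.loader as data_loader
import neurox.interpretation.utils as utils
import neurox.interpretation.linear_probe as linear_probe

# Load previously extracted activations (see the extraction sketch above).
activations, num_layers = data_loader.load_activations("bert_activations.json")

# Pair tokens and labels with activations; file names here are assumed examples.
tokens = data_loader.load_data("train.word", "train.label", activations, 512)

# Build (X, y) tensors for a toy token-labeling task and train a linear probe.
X, y, mapping = utils.create_tensors(tokens, activations, "NN")
label2idx, idx2label, src2idx, idx2src = mapping
probe = linear_probe.train_logistic_regression_probe(
    X, y, lambda_l1=0.001, lambda_l2=0.001
)
linear_probe.evaluate_probe(probe, X, y, idx_to_class=idx2label)

# Rank neurons by their importance to the task (top 5% here).
top_neurons, top_neurons_per_class = linear_probe.get_top_neurons(
    probe, 0.05, label2idx
)
```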
## Documentation
The API Reference documents all of the functions exposed by this toolkit. The toolkit's functionality is primarily separated into several high-level components:
- Extraction
- Data Preprocessing
- Linear Probing
- Neuron extraction and interpretation
- Neuron cluster analysis
- Visualization
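Roughly, these components map onto the package's modules as follows. The paths are taken from the API reference, except where noted; the cluster-analysis path in particular is an assumption and may differ:

```python
import neurox.data.extraction.transformers_extractor  # Extraction
import neurox.data.loader                             # Data Preprocessing
import neurox.interpretation.linear_probe             # Linear Probing
import neurox.interpretation.ablation                 # Neuron extraction and interpretation
import neurox.interpretation.clustering               # Neuron cluster analysis (assumed path)
import neurox.analysis.visualization                  # Visualization
```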
## Citation
Please cite our paper published at AAAI'19 if you use this toolkit:
```bibtex
@article{dalvi2019neurox,
  title={NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks},
  author={Dalvi, Fahim and
          Nortonsmith, Avery and
          Bau, D Anthony and
          Belinkov, Yonatan and
          Sajjad, Hassan and
          Durrani, Nadir and
          Glass, James},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
  year={2019}
}
```
## Planned features
- [x] Pip package
- [ ] Support for OpenNMT-py models
- [ ] Support for control tasks and computing metrics like selectivity
- [ ] Support for attention and other module analysis
## Publications
- Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani (2021). Fine-grained Interpretation and Causation Analysis in Deep NLP Models. In Proceedings of the 18th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), Virtual, June.
- Nadir Durrani, Hassan Sajjad, Fahim Dalvi (2021). How transfer learning impacts linguistic knowledge in deep NLP models? In Findings of the Association for Computational Linguistics (ACL-IJCNLP), Virtual, August.
- Yonatan Belinkov*, Nadir Durrani*, Fahim Dalvi, Hassan Sajjad, Jim Glass (2020). On the Linguistic Representational Power of Neural Machine Translation Models. Computational Linguistics, 46(1), pages 1-57. (*Equal contribution, alphabetical order.)
- Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov (2020). Analyzing Individual Neurons in Pre-trained Language Models. In Proceedings of the 17th Conference on Empirical Methods in Natural Language Processing (EMNLP), Punta Cana, Dominican Republic, November.
- Fahim Dalvi, Hassan Sajjad, Nadir Durrani, Yonatan Belinkov (2020). Analyzing Redundancy in Pretrained Transformer Models. In Proceedings of the 17th Conference on Empirical Methods in Natural Language Processing (EMNLP), Punta Cana, Dominican Republic, November.
- John M Wu*, Yonatan Belinkov*, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass (2020). Similarity Analysis of Contextual Word Representation Models. In Proceedings of the 58th Annual Conference of the Association for Computational Linguistics (ACL), Seattle, USA, July. (*Equal contribution, alphabetical order.)
- Anthony Bau*, Yonatan Belinkov*, Hassan Sajjad, Fahim Dalvi, Nadir Durrani, James Glass (2019). Identifying and Controlling Important Neurons in Neural Machine Translation. In Proceedings of the 7th International Conference on Learning Representations (ICLR), New Orleans, USA, May. (*Equal contribution, alphabetical order.)
- Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, Preslav Nakov (2019). One Size Does Not Fit All: Comparing NMT Representations of Different Granularities. In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), Minneapolis, USA, June.
- Fahim Dalvi*, Nadir Durrani*, Hassan Sajjad*, Yonatan Belinkov, D. Anthony Bau, James Glass (2019). What is one Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), Honolulu, USA, January. (*Equal contribution, alphabetical order.)
- Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass (2017). What do Neural Machine Translation Models Learn about Morphology? In Proceedings of the 55th Annual Conference of the Association for Computational Linguistics (ACL), Vancouver, Canada, July.
- Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Stephan Vogel (2017). Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder. In Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP), Taipei, Taiwan, November.
- Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass (2017). Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks. In Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP), Taipei, Taiwan, November.