# RAM

## Introduction

This repository focuses on developing advanced algorithms and methods for RAM (Reasoning, Alignment, Memory).

<!---
- [System 2 Attention (is something you might need too)](https://arxiv.org/pdf/2311.11829.pdf)
- [Some things are more CRINGE than others: Preference Optimization with the Pairwise Cringe Loss](https://arxiv.org/pdf/2312.16682.pdf)

## Installation

To install the project, clone the repository and install the required dependencies:

```bash
git clone https://github.com/facebookresearch/RAM.git
cd RAM
pip install -r requirements.txt
```
-->

## Projects

Please see the Projects page for an up-to-date list of projects released by RAM.

<!---
## Data

The data needed to run our code is hosted on HuggingFace:

- https://huggingface.co/OpenAssistant
- https://huggingface.co/datasets/tatsu-lab/alpaca_eval

## Model

The libraries needed to run our code are:

- [Llama from HuggingFace](https://huggingface.co/docs/transformers/main/model_doc/llama). To run HuggingFace Llama models, make sure to convert your LLaMA checkpoint and tokenizer into HuggingFace format and store it at <your_path_to_hf_converted_llama_ckpt_and_tokenizer>.
- [Alpaca Eval](https://github.com/tatsu-lab/alpaca_eval) for any inference-only Llama experiments.
-->

## Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

## License

This project is licensed under the MIT License; see the LICENSE file for details. The license applies to the released data as well.

## Contact

RAM is currently maintained by Olga Golovneva, Ilia Kulikov, Janice Lan, Xian Li, Richard Pang, Sainbayar Sukhbaatar, Tianlu Wang, Jason Weston, Jing Xu, Jane Dwivedi-Yu, Ping Yu, Weizhe Yuan. For any queries, please reach out to Jing Xu.