# Awesome Machine Translation with LLMs Reading List
This is a reading list on machine translation with large language models (LLMs), maintained by Xing Wang and Zhiwei He.

We greatly appreciate any contributions via PRs, issues, emails, or other methods.
- In-context Learning
- Assessment
- Chain-of-Thought Prompting
- Base Model Pre-training
- Translation Finetuning
- LLMs as Scorer
- Post-Editing
- Interpretability
- Decoding
## In-context Learning

- Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. (NeurIPS 2020)
- Few-shot Learning with Multilingual Generative Language Models. Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. (EMNLP 2022)
- In-context Examples Selection for Machine Translation. Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad. (Findings of ACL 2023)
- Prompting PaLM for Translation: Assessing Strategies and Performance. David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George Foster. (ACL 2023)
- Prompting Large Language Model for Machine Translation: A Case Study. Biao Zhang, Barry Haddow, Alexandra Birch. (ICML 2023)
- Prompting Neural Machine Translation with Translation Memories. Abudurexiti Reheman, Tao Zhou, Yingfeng Luo, Di Yang, Tong Xiao, Jingbo Zhu. (AAAI 2023)
- Adaptive Machine Translation with Large Language Models. Yasmin Moslem, Rejwanul Haque, John D. Kelleher, Andy Way. (EAMT 2023) {code}
- The unreasonable effectiveness of few-shot learning for machine translation. Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Fangxiaoyu Feng, Melvin Johnson, Orhan Firat. (ICML 2023)
- Dictionary-based Phrase-level Prompting of Large Language Models for Machine Translation. Marjan Ghazvininejad, Hila Gonen, Luke Zettlemoyer. (arxiv 2023)
- RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation. Gabriele Sarti, Phu Mon Htut, Xing Niu, Benjamin Hsu, Anna Currey, Georgiana Dinu, Maria Nadejde. (ACL 2023)
- Instruction Position Matters in Sequence Generation with Large Language Models. Yijin Liu, Xianfeng Zeng, Fandong Meng, Jie Zhou. (arxiv 2023) {code}
- Improving Translation Faithfulness of Large Language Models via Augmenting Instructions. Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou. (arxiv 2023) {code}
- Neural Machine Translation Models Can Learn to be Few-shot Learners. Raphael Reinauer, Patrick Simianer, Kaden Uhlig, Johannes E. M. Mosig, Joern Wuebker. (arxiv 2023)
- Towards Effective Disambiguation for Machine Translation with Large Language Models. Vivek Iyer, Pinzhen Chen, Alexandra Birch. (arxiv 2023)
- Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning. Duarte M. Alves, Nuno M. Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, André F. T. Martins. (Findings of EMNLP 2023) {code}
- Dissecting In-Context Learning of Translations in GPTs. Vikas Raunak, Hany Hassan Awadalla, Arul Menezes. (Findings of EMNLP 2023)
- Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles. Weiting Tan, Haoran Xu, Lingfeng Shen, Shuyue Stella Li, Kenton Murray, Philipp Koehn, Benjamin Van Durme, Yunmo Chen. (arxiv 2023)
- Anti-LM Decoding for Zero-shot In-context Machine Translation. Suzanna Sia, Alexandra DeLucia, Kevin Duh. (arxiv 2023)
- MT2: Towards a Multi-Task Machine Translation Model with Translation-Specific In-Context Learning. Chunyou Li, Mingtong Liu, Hongxiao Zhang, Yufeng Chen, Jinan Xu, Ming Zhou. (EMNLP 2023)
- Towards Robust In-Context Learning for Machine Translation with Large Language Models. Shaolin Zhu, Menglong Cui, Deyi Xiong. (LREC-COLING 2024)
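The papers above share a common mechanical core: a prompt built by concatenating a few translation pairs before the sentence to translate. A minimal sketch, in the style of Brown et al. (2020) and Vilar et al. (2023) — the template and language names are illustrative, not taken from any single paper:

```python
def build_few_shot_prompt(examples, source, src_lang="German", tgt_lang="English"):
    """Concatenate k translation pairs, then the source sentence to translate.

    `examples` is a list of (source, target) string pairs; the model is
    expected to continue the prompt with the missing target translation.
    """
    lines = []
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    # The final pair is left incomplete for the model to fill in.
    lines.append(f"{src_lang}: {source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)
```

The assembled string is sent to any text-completion LLM; the example-selection papers above (e.g. Agrawal et al., 2023) differ mainly in how the `examples` list is retrieved, typically favoring pairs similar to the source sentence.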
## Assessment

- Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine. Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Zhaopeng Tu. (arxiv 2023) {code}
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung. (arxiv 2023) {code}
- How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation. Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, Hany Hassan Awadalla. (arxiv 2023) {code}
- Document-Level Machine Translation with Large Language Models. Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, Zhaopeng Tu. (arxiv 2023) {code}
- Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis. Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, Lei Li. (arxiv 2023) {code}
- How to Design Translation Prompts for ChatGPT: An Empirical Study. Yuan Gao, Ruili Wang, Feng Hou. (arxiv 2023)
- Investigating the Translation Performance of a Large Multilingual Language Model: the Case of BLOOM. Rachel Bawden, François Yvon. (EAMT 2023) {code}
- Large language models effectively leverage document-level context for literary translation, but critical errors persist. Marzena Karpinska, Mohit Iyyer. (arxiv 2023) {code}
- Do GPTs Produce Less Literal Translations? Vikas Raunak, Arul Menezes, Matt Post, Hany Hassan Awadalla. (ACL 2023)
- Zeno GPT-MT Report. Graham Neubig. (github 2023) {code}
- ChatGPT MT: Competitive for High- (but not Low-) Resource Languages. Nathaniel R. Robinson, Perez Ogayo, David R. Mortensen, Graham Neubig. (WMT 2023)
## Chain-of-Thought Prompting

- Exploring Human-Like Translation Strategy with Large Language Models. Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang. (TACL 2024) {code}
- Towards Making the Most of ChatGPT for Machine Translation. Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao. (Findings of EMNLP 2023) {code}
- Chain-of-Dictionary Prompting Elicits Translation in Large Language Models. Hongyuan Lu, Haoyang Huang, Dongdong Zhang, Haoran Yang, Wai Lam, Furu Wei. (arxiv 2023)
- Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi. (arxiv 2023) {code}
- Aligning Translation-Specific Understanding to General Understanding in Large Language Models. Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin. (arxiv 2024)
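Several of these papers decompose translation into intermediate reasoning steps. A two-step sketch in the spirit of the human-like strategy work above (He et al., 2024): first elicit translation-relevant knowledge, then condition the final translation on it. The prompt wording is illustrative only, and `llm` stands in for any text-generation callable:

```python
def translate_with_knowledge(llm, source, src_lang="German", tgt_lang="English"):
    """Two LLM calls: extract keyword translations, then translate using them."""
    # Step 1: ask the model for translation-relevant knowledge (here: keywords).
    knowledge = llm(
        f"List the keywords in this {src_lang} sentence and their "
        f"{tgt_lang} translations:\n{source}"
    )
    # Step 2: translate, conditioning on the elicited knowledge.
    return llm(
        f"Using these keyword translations:\n{knowledge}\n"
        f"Translate the following into {tgt_lang}: {source}"
    )
```

The actual papers elicit richer knowledge (topics, demonstrations, dictionary chains) and often rerank several candidate translations; this sketch shows only the two-call skeleton.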
## Base Model Pre-training

- Cross-Lingual Supervision improves Large Language Models Pre-training. Andrea Schioppa, Xavier Garcia, Orhan Firat. (arxiv 2023)
- InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities. InternLM Team. (github 2023) {code}
- PolyLM: An Open Source Polyglot Large Language Model. Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, Tianxiang Hu, Shangjie Li, Binyuan Hui, Bowen Yu, Dayiheng Liu, Baosong Yang, Fei Huang, Jun Xie. (arxiv 2023) {code}
- OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch. Juntao Li, Zecheng Tang, Yuyang Ding, Pinzheng Wang, Pei Guo, Wangjie You, Dan Qiao, Wenliang Chen, Guohong Fu, Qiaoming Zhu, Guodong Zhou, Min Zhang. (arxiv 2023) {code}
## Translation Finetuning

- Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages. Zhuoyuan Mao, Yen Yu. (arxiv 2024)
- A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models. Haoran Xu, Young Jin Kim, Amr Sharaf, Hany Hassan Awadalla. (ICLR 2024) {code}
- ParroT: Translating During Chat Using Large Language Models. Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Zhiwei He, Tian Liang, Xing Wang, Shuming Shi, Zhaopeng Tu. (Findings of EMNLP 2023) {code}
- Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions. Jiahuan Li, Hao Zhou, Shujian Huang, Shanbo Chen, Jiajun Chen. (arxiv 2023)
- BigTrans: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages. Wen Yang, Chong Li, Jiajun Zhang, Chengqing Zong. (arxiv 2023) {code}
- BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models. Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, Yang Feng. (arxiv 2023) {code}
- TIM: Teaching Large Language Models to Translate with Comparison. Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou. (arxiv 2023) {code}
- Extrapolating Large Language Models to Non-English by Aligning Languages. Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, Lei Li. (arxiv 2023) {code}
- Towards Effective Disambiguation for Machine Translation with Large Language Models. Vivek Iyer, Pinzhen Chen, Alexandra Birch. (arxiv 2023)
- Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning. Duarte M. Alves, Nuno M. Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, André F. T. Martins. (Findings of EMNLP 2023) {code}
- Domain-Specific Text Generation for Machine Translation. Yasmin Moslem, Rejwanul Haque, John D. Kelleher, Andy Way. (AMTA 2022)
- Fine-tuning Large Language Models for Adaptive Machine Translation. Yasmin Moslem, Rejwanul Haque, Andy Way. (arxiv 2023)
- Adapting Large Language Models for Document-Level Machine Translation. Minghao Wu, Thuy-Trang Vu, Lizhen Qu, George Foster, Gholamreza Haffari. (arxiv 2023)
- Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation. Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim. (arxiv 2024) {code}
## LLMs as Scorer

- GPTScore: Evaluate as You Desire. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, Pengfei Liu. (arxiv 2023) {code}
- Large Language Models Are State-of-the-Art Evaluators of Translation Quality. Tom Kocmi, Christian Federmann. (EAMT 2023) {code}
- Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT. Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, Dacheng Tao. (arxiv 2023) {code}
- INSTRUCTSCORE: Towards Explainable Text Generation Evaluation with Automatic Feedback. Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, Lei Li. (EMNLP 2023) {code}
- Towards Explainable Evaluation Metrics for Machine Translation. Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger. (arxiv 2023)
- The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation. Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, Orhan Firat. (WMT 2023)
- Towards Multiple References Era -- Addressing Data Leakage and Limited Reference Diversity in NLG Evaluation. Xianfeng Zeng, Yijin Liu, Fandong Meng, Jie Zhou. (arxiv 2023) {code}
- Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation. Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, Shujian Huang. (arxiv 2024) {code}
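The scoring papers above prompt an LLM to rate a translation and parse a number out of its reply. A sketch of direct-assessment scoring in the spirit of Kocmi & Federmann (2023) — the exact prompt wording differs from the paper, and `parse_score` simply takes the first integer in the model's reply:

```python
import re

def build_da_prompt(source, translation, src_lang="German", tgt_lang="English"):
    """Direct-assessment prompt: ask for a 0-100 quality score."""
    return (
        f"Score the following translation from {src_lang} to {tgt_lang} "
        f"on a scale from 0 to 100.\n"
        f"{src_lang} source: {source}\n"
        f"{tgt_lang} translation: {translation}\n"
        f"Score:"
    )

def parse_score(reply):
    """Extract the first integer from the model's reply, or None if absent."""
    match = re.search(r"\d+", reply)
    return int(match.group()) if match else None
```

Later work in this section moves from a single scalar score to fine-grained, MQM-style error spans with severities, which need a more structured parser than this one-regex extraction.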
## Post-Editing

- Leveraging GPT-4 for Automatic Translation Post-Editing. Vikas Raunak, Amr Sharaf, Hany Hassan Awadallah, Arul Menezes. (arxiv 2023)
- Iterative Translation Refinement with Large Language Models. Pinzhen Chen, Zhicheng Guo, Barry Haddow, Kenneth Heafield. (arxiv 2023)
- Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi. (arxiv 2023) {code}
- Contextual Refinement of Translations: Large Language Models for Sentence and Document-Level Post-Editing. Sai Koneru, Miriam Exel, Matthias Huck, Jan Niehues. (arxiv 2023)
- SCALE: Synergized Collaboration of Asymmetric Language Translation Engines. Xin Cheng, Xun Wang, Tao Ge, Si-Qing Chen, Furu Wei, Dongyan Zhao, Rui Yan. (arxiv 2023)
- Domain Terminology Integration into Machine Translation: Leveraging Large Language Models. Yasmin Moslem, Gianfranco Romani, Mahdi Molaei, Rejwanul Haque, John D. Kelleher, Andy Way. (WMT 2023)
- Improving Machine Translation with Large Language Models: A Preliminary Study with Cooperative Decoding. Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou. (arxiv 2023)
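Iterative refinement (cf. Chen et al., 2023, above) feeds the model's previous output back with a rewrite instruction until the translation stabilizes. A minimal sketch, where `llm` is a stand-in for any text-generation callable (not a real API) and the prompt wording is illustrative:

```python
def refine(llm, source, draft, max_rounds=3):
    """Repeatedly ask the model to improve its own translation."""
    current = draft
    for _ in range(max_rounds):
        prompt = (
            "Improve the following translation. Reply with only the "
            f"improved translation.\nSource: {source}\nTranslation: {current}"
        )
        revised = llm(prompt).strip()
        if revised == current:  # fixed point reached: stop early
            break
        current = revised
    return current
```

The post-editing papers above vary what else enters the prompt — quality-estimation feedback, document context, terminology constraints — but share this refine-and-resubmit loop.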
## Interpretability

- Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM's Translation Capability. Eleftheria Briakou, Colin Cherry, George Foster. (ACL 2023)
## Decoding

- Improving Machine Translation with Large Language Models: A Preliminary Study with Cooperative Decoding. Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou. (arxiv 2023)
- On-the-Fly Fusion of Large Language Models and Machine Translation. Hieu Hoang, Huda Khayrallah, Marcin Junczys-Dowmunt. (arxiv 2023)