# awesome-neural-reprogramming-acoustic-prompting
A curated list of adversarial reprogramming and input prompting methods for neural networks, covering work since 2022.

**News**: Dr. Pin-Yu Chen and Huck gave a tutorial on adversarial reprogramming at ICASSP 2022. Video
How to empower frozen large-scale pre-trained models with reprogramming and prompting for different downstream applications is one of the next big challenges in deep learning. Contributions via commits and pull requests are welcome!
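As a toy illustration of the input-level idea behind these methods, the sketch below trains only an additive input prompt while the pre-trained model stays completely frozen. The two-class linear "model", its weights, and the hinge-style loss are all illustrative assumptions for this sketch, not taken from any paper in the list.

```python
# Frozen "pre-trained model": a fixed linear scorer whose weights are
# never updated. Only the input prompt `delta` is trained.
W = [[0.8, -0.4], [-0.3, 0.9]]  # 2 classes x 2 input features (toy values)

def frozen_model(x):
    """Return one logit per class for input x; weights stay fixed."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def prompt_loss(x, delta, target):
    """Hinge-style loss: the target logit should beat the best other logit."""
    logits = frozen_model([xi + di for xi, di in zip(x, delta)])
    other = max(l for c, l in enumerate(logits) if c != target)
    return max(0.0, other - logits[target] + 1.0)

# The original input is classified as class 0; we want class 1.
x, target = [1.0, 0.0], 1
delta = [0.0, 0.0]          # the trainable input prompt
lr, eps = 0.5, 1e-4

# Optimize `delta` with finite-difference gradient descent
# (a stand-in for backprop, to keep the sketch dependency-free).
for _ in range(200):
    grad = []
    for i in range(len(delta)):
        d_plus = delta[:]
        d_plus[i] += eps
        grad.append((prompt_loss(x, d_plus, target)
                     - prompt_loss(x, delta, target)) / eps)
    delta = [d - lr * g for d, g in zip(delta, grad)]

prompted = [xi + di for xi, di in zip(x, delta)]
logits = frozen_model(prompted)
print(logits.index(max(logits)))  # prints 1: the prompt steers the frozen model
```

The same principle scales up: in visual prompting the learned `delta` is a pixel-space padding or perturbation, and in speech prompting it is a learned segment prepended or added to the audio features, while the backbone model remains untouched.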
<img src="https://github.com/huckiyang/awesome-neural-reprogramming-prompting/blob/main/repro-prompt-chh.png" width="300">

## Neural Adapters in Decoder, Encoder, and Inputs
Title | Authors | Code | Year |
---|---|---|---|
Differentially Private Adapters for Parameter Efficient Acoustic Modeling | C.-W. Ho et al. | code | Interspeech 2023 |
Parameter-Efficient Learning for Text-to-Speech Accent Adaptation | L.-J. Yang et al. | code | Interspeech 2023 |
A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model | S. Radhakrishnan et al. | code | Interspeech 2023 |
## Neural Reprogramming, Adversarial Reprogramming, and Offsite-Tuning
## Input-Level Neural Model Prompting for Vision and Speech
Title | Authors | Code | Year |
---|---|---|---|
SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks | K.-W. Chang et al. | code | arXiv 2023 |
Understanding and Improving Visual Prompting: A Label-Mapping Perspective | A. Chen et al. | - | arXiv 2022 |
AudioLM: a Language Modeling Approach to Audio Generation | Z. Borsos et al. | - | arXiv 2022 |
WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen Language Models | H. Gao et al. | - | arXiv 2022 |
An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks | K.-W. Chang et al. | code | Interspeech 2022 |
Visual Prompting: Modifying Pixel Space to Adapt Pre-trained Models | H. Bahng et al. | - | arXiv 2022 |
## Theory
Title | Authors | Code | Year |
---|---|---|---|
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution | A. Kumar et al. | - | ICLR 2022 |