<div align="center">
<img src="https://github.com/zjunlp/PromptKG/blob/main/resources/logo.svg" width="350px">

**PromptKG Family**: a gallery of Prompt Learning & KG-related research works, toolkits, and paper lists.
</div>

Directory | Description |
---|---|
research | • A collection of prompt learning-related research model implementations |
lambdaKG | • A library for PLM-based KG embeddings and applications |
deltaKG | • A library for dynamically editing PLM-based KG embeddings |
tutorial-notebooks | • Tutorial notebooks for beginners |
## Table of Contents

- [Tutorials](#tutorials)
- [Surveys](#surveys)
- [Papers](#papers)
- [Contact Information](#contact-information)

## Tutorials
- Zero- and Few-Shot NLP with Pretrained Language Models. AACL 2022 Tutorial [ppt]
- Data-Efficient Knowledge Graph Construction. CCKS2022 Tutorial [ppt]
- Efficient and Robust Knowledge Graph Construction. AACL-IJCNLP 2022 Tutorial [ppt]
- Knowledge Informed Prompt Learning. MLNLP 2022 Tutorial (Chinese) [ppt]
## Surveys
- Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models (arXiv 2021) [paper]
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (ACM Computing Surveys 2021) [paper]
- reStructured Pre-training (arXiv 2022) [paper]
- A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models (arXiv 2022) [paper]
- A Survey of Knowledge-Enhanced Pre-trained Language Models (arXiv 2022) [paper]
- A Review on Language Models as Knowledge Bases (arXiv 2022) [paper]
- Generative Knowledge Graph Construction: A Review (EMNLP 2022) [paper]
- Reasoning with Language Model Prompting: A Survey (arXiv 2022) [paper]
- Reasoning over Different Types of Knowledge Graphs: Static, Temporal and Multi-Modal (arXiv 2022) [paper]
- The Life Cycle of Knowledge in Big Language Models: A Survey (arXiv 2022) [paper]
- Unifying Large Language Models and Knowledge Graphs: A Roadmap (arXiv 2023) [paper]
## Papers

### Knowledge as Prompt

#### Language Understanding

*A demonstration-retrieval sketch follows this list.*
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, in NeurIPS 2020. [pdf]
- REALM: Retrieval-Augmented Language Model Pre-Training, in ICML 2020. [pdf]
- Making Pre-trained Language Models Better Few-shot Learners, in ACL 2021. [pdf]
- PTR: Prompt Tuning with Rules for Text Classification, in AI Open 2022. [pdf]
- Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction, in EMNLP 2021. [pdf]
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction, in ACL 2022 (Findings). [pdf]
- Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification, in ACL 2022. [pdf]
- PPT: Pre-trained Prompt Tuning for Few-shot Learning, in ACL 2022. [pdf]
- Contrastive Demonstration Tuning for Pre-trained Language Models, in EMNLP 2022 (Findings). [pdf]
- AdaPrompt: Adaptive Model Training for Prompt-based NLP, on arXiv 2022. [pdf]
- KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction, in WWW 2022. [pdf]
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction, in SIGIR 2023. [pdf]
- Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning, in NeurIPS 2022. [pdf]
- Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning, in SIGIR 2022. [pdf]
- LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting, in COLING 2022. [pdf]
- Unified Structure Generation for Universal Information Extraction, in ACL 2022. [pdf]
- LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model, in NeurIPS 2022. [pdf]
- Atlas: Few-shot Learning with Retrieval Augmented Language Models, on arXiv 2022. [pdf]
- Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models, in EMNLP 2022. [pdf]
- Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding, in EMNLP 2022. [pdf]
- Unified Knowledge Prompt Pre-training for Customer Service Dialogues, in CIKM 2022. [pdf]
- Self-Instruct: Aligning Language Models with Self-Generated Instructions, on arXiv 2022. [pdf]
- One Embedder, Any Task: Instruction-Finetuned Text Embeddings, on arXiv 2022. [pdf]
- Learning To Retrieve Prompts for In-Context Learning, in NAACL 2022. [pdf]
- Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data, in ACL 2022. [pdf]
- One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER, on arXiv 2023. [pdf]
- REPLUG: Retrieval-Augmented Black-Box Language Models, on arXiv 2023. [pdf]
- Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering, on arXiv 2023. [pdf]
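
Several entries above (retrieval-augmented prompt learning, open-book relation extraction, learned prompt retrieval) share one core mechanic: retrieve the training examples most similar to the input and prepend them to the prompt as demonstrations. The sketch below shows only that mechanic; the TF-IDF retriever and the toy example pool are illustrative stand-ins, not any listed paper's actual setup (those typically use dense, trained retrievers).

```python
# A minimal sketch of retrieval-augmented demonstration selection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy labeled pool to retrieve demonstrations from (hypothetical examples).
train_pool = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want a refund; the device broke in a day.", "negative"),
    ("The service was slow but the food was great.", "positive"),
]

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the k pool examples most similar to the query and
    prepend them to the query as in-context demonstrations."""
    texts = [t for t, _ in train_pool]
    vec = TfidfVectorizer().fit(texts + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(texts))[0]
    top = sims.argsort()[::-1][:k]  # indices of the k most similar examples
    demos = "\n".join(f"Input: {train_pool[i][0]}\nLabel: {train_pool[i][1]}" for i in top)
    return f"{demos}\nInput: {query}\nLabel:"

print(build_prompt("The screen cracked after one week."))
```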
#### Multimodal

*A learnable-prompt sketch follows this list.*
- Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction, in NAACL 2022 (Findings). [pdf]
- Visual Prompt Tuning, in ECCV 2022. [pdf]
- CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models, in EMNLP 2022. [pdf]
- Learning to Prompt for Vision-Language Models, in IJCV 2022. [pdf]
- Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models, in NeurIPS 2022. [pdf]
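
A common thread in the works above (e.g. CoOp and Visual Prompt Tuning) is learning a few continuous prompt vectors while the pre-trained backbone stays frozen. Below is a toy PyTorch sketch of that idea; `class_embeds` is a hypothetical stand-in for frozen class-name token embeddings, and the module is illustrative rather than any paper's released code.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """CoOp-style learnable context: n_ctx trainable vectors are prepended
    to frozen class-name token embeddings to form per-class prompts."""
    def __init__(self, n_ctx: int, dim: int, class_embeds: torch.Tensor):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)  # the only trainable part
        self.register_buffer("class_embeds", class_embeds)       # frozen (n_cls, L, dim)

    def forward(self) -> torch.Tensor:
        n_cls = self.class_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)  # share context across classes
        return torch.cat([ctx, self.class_embeds], dim=1)  # (n_cls, n_ctx + L, dim)

# Hypothetical shapes: 10 classes, 4 name tokens each, 512-dim embeddings.
prompts = PromptLearner(n_ctx=16, dim=512, class_embeds=torch.randn(10, 4, 512))()
print(prompts.shape)  # torch.Size([10, 20, 512])
```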
#### Advanced Tasks
- Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5), in ACM RecSys 2022. [pdf]
- Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning, in KDD 2022. [pdf]
- PromptEM: Prompt-tuning for Low-resource Generalized Entity Matching, in VLDB 2023. [pdf]
- VIMA: General Robot Manipulation with Multimodal Prompts, on arXiv 2022. [pdf]
- Unbiasing Retrosynthesis Language Models with Disconnection Prompts, on arXiv 2022. [pdf]
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models, on arXiv 2022. [pdf]
- Collaborating with language models for embodied reasoning, in NeurIPS 2022 Workshop LaReL. [pdf]
### Prompt (PLMs) for Knowledge

#### Knowledge Probing

*A cloze-style probing sketch follows this list.*
- How Much Knowledge Can You Pack Into the Parameters of a Language Model? in EMNLP 2020. [pdf]
- Language Models as Knowledge Bases? in EMNLP 2019. [pdf]
- Materialized Knowledge Bases from Commonsense Transformers, in CSRR 2022. [pdf]
- Time-Aware Language Models as Temporal Knowledge Bases, in TACL 2022. [pdf]
- Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? in ACL 2021. [pdf]
- Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries, in EACL 2021. [pdf]
- Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study, in AKBC 2021. [pdf]
- Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models, in EACL 2021. [pdf]
- How Can We Know What Language Models Know? in TACL 2020. [pdf]
- How Context Affects Language Models' Factual Predictions, in AKBC 2020. [pdf]
- COPEN: Probing Conceptual Knowledge in Pre-trained Language Models, in EMNLP 2022. [pdf]
- Probing Simile Knowledge from Pre-trained Language Models, in ACL 2022. [pdf]
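
Most of the probing papers above build on the LAMA recipe from "Language Models as Knowledge Bases?": state a fact as a cloze sentence and check whether a masked language model ranks the correct entity highly. A minimal sketch using the Hugging Face `fill-mask` pipeline (the model choice and template are illustrative):

```python
# Minimal LAMA-style cloze probe: does the masked LM "know" the fact?
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Classic probe from the LAMA paper; a knowledgeable model ranks "florence" highly.
for pred in fill_mask("Dante was born in [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  p={pred['score']:.3f}")
```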
#### Knowledge Graph Embedding (we provide lambdaKG, a library and benchmark; a triple-scoring sketch follows this list)
- KG-BERT: BERT for Knowledge Graph Completion, on arXiv 2019. [pdf]
- Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models, in COLING 2020. [pdf]
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion, in WWW 2021. [pdf]
- KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, in TACL 2021. [pdf]
- StATIK: Structure and Text for Inductive Knowledge Graph Completion, in NAACL 2022. [pdf]
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion, in COLING 2022. [pdf]
- Knowledge Is Flat: A Seq2Seq Generative Framework for Various Knowledge Graph Completion, in COLING 2022. [pdf]
- Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach, in ACL 2022. [pdf]
- Language Models as Knowledge Embeddings, in IJCAI 2022. [pdf]
- From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer, in WWW 2022. [pdf]
- Reasoning Through Memorization: Nearest Neighbor Knowledge Graph Embeddings, on arXiv 2022. [pdf]
- SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models, in ACL 2022. [pdf]
- Sequence to Sequence Knowledge Graph Completion and Question Answering, in ACL 2022. [pdf]
- LP-BERT: Multi-task Pre-training Knowledge Graph BERT for Link Prediction, on arXiv 2022. [pdf]
- Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries, in KDD 2022. [pdf]
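
As noted in the heading, here is a minimal sketch of the triple-scoring recipe these papers build on (popularized by KG-BERT): verbalize a (head, relation, tail) triple as text and let a PLM classifier judge its plausibility. This is an untrained illustration of the input encoding, not lambdaKG's actual API; in practice the classifier is fine-tuned on positive triples and corrupted negatives.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def score_triple(head: str, relation: str, tail: str) -> float:
    """Score a verbalized triple. KG-BERT separates h/r/t with [SEP];
    here this is simplified to a sentence pair (head + relation, tail)."""
    inputs = tokenizer(f"{head} {relation}", tail, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(triple is plausible)

# Untrained output is near-random (~0.5); fine-tuning makes it meaningful.
print(score_triple("Barack Obama", "place of birth", "Honolulu"))
```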
#### Analysis
- Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases, in ACL 2021. [pdf]
- Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View, in ACL 2022. [pdf]
- How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis, in ACL 2022. [pdf]
- Emergent Abilities of Large Language Models, on arXiv 2022. [pdf]
- Knowledge Neurons in Pretrained Transformers, in ACL 2022. [pdf]
- Finding Skill Neurons in Pre-trained Transformer-based Language Models, in EMNLP 2022. [pdf]
- Do Prompts Solve NLP Tasks Using Natural Languages? on arXiv 2022. [pdf]
- Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? in EMNLP 2022. [pdf]
- Do Prompt-Based Models Really Understand the Meaning of their Prompts? in NAACL 2022. [pdf]
- When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories, on arXiv 2022. [pdf]
- Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers, on arXiv 2022. [pdf]
- Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity, in ACL 2022. [pdf]
- Editing Large Language Models: Problems, Methods, and Opportunities, on arXiv 2023. [pdf]
## Contact Information

For help or issues using the toolkits, please submit a GitHub issue.