
Personal LLM Agents - Survey

This repo maintains a curated list of papers related to Personal LLM Agents. For more details, please refer to our paper or join our discussion group:

Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu

[arxiv] [pdf] [cite] [discuss (zulip)]

Personal LLM Agents are defined as a special type of LLM-based agents that are deeply integrated with personal data, personal devices, and personal services. They are preferably deployed on resource-constrained mobile/edge devices and/or powered by lightweight AI models. The main purpose of Personal LLM Agents is to assist end-users and augment their abilities, helping them focus more on, and do better at, interesting and important affairs.

This paper list covers several main aspects of Personal LLM Agents, including their capabilities, efficiency, and security. Table of contents:

Key Capabilities of Personal LLM Agents

Task Automation

Task automation is a core capability of personal LLM agents, which determines how well the agent can respond to user commands and/or automatically execute tasks for the user.

We focus on UI-based task automation agents in this list due to their popularity and close relevance to personal devices.
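To make the idea concrete, below is a minimal, purely illustrative sketch of the perceive-decide-act loop behind a UI-grounded automation agent. Everything here is hypothetical: `UIState`, the `tap(...)` action format, and `llm_choose_action` (a keyword-matching stand-in for a real LLM call) are invented for this sketch and do not correspond to any specific system in the list.

```python
from dataclasses import dataclass, field

@dataclass
class UIState:
    """Toy representation of the current screen and its interactive widgets."""
    screen: str
    widgets: list[str] = field(default_factory=list)

def llm_choose_action(goal: str, state: UIState) -> str:
    # Stand-in for an LLM call: pick the first widget whose label
    # appears in the user's goal; otherwise declare the task finished.
    for w in state.widgets:
        if w.lower() in goal.lower():
            return f"tap({w})"
    return "done"

def run_agent(goal: str, state: UIState, max_steps: int = 5) -> list[str]:
    """Perceive the UI, ask the (mocked) LLM for an action, execute, repeat."""
    trace = []
    for _ in range(max_steps):
        action = llm_choose_action(goal, state)
        trace.append(action)
        if action == "done":
            break
        # Toy "execution": remove the tapped widget so the loop makes progress.
        state.widgets.remove(action[4:-1])
    return trace

home = UIState("home", widgets=["Settings", "Wi-Fi"])
print(run_agent("open Settings and enable Wi-Fi", home))
# → ['tap(Settings)', 'tap(Wi-Fi)', 'done']
```

Real UI agents replace the keyword matcher with an LLM that reads a serialized view hierarchy or screenshot and emits grounded actions, but the outer observe-act loop is the same shape.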

UI-grounded Agents for Task Automation

LLM-based Approaches

Traditional Approaches

Benchmarks of UI Automation

Sensing

The ability to understand the current context is crucial for Personal LLM Agents to offer personalized, context-aware services. This includes techniques to sense user activity, mental status, environment dynamics, etc.

LLM-based Approaches

Traditional Approaches


Memorization

Memorization refers to the ability of Personal LLM Agents to maintain information about the user, so that the agents can provide more customized services and evolve according to user preferences.

Memory Acquisition

Memory Management

Agent Self-evolution

Efficiency of LLM Agents

The efficiency of LLM agents is closely related to the efficiency of LLM inference, LLM training/customization, and memory management.

Efficient LLM Inference & Training

LLM inference/training efficiency has been comprehensively summarized in existing surveys (e.g. this link). Therefore, we omit that part from this list.

Efficient Memory Retrieval & Management

Here we mainly list papers related to efficient memory management, an important component of LLM-based agents.

Organizing the Memory

(with vector libraries, vector databases, and other forms of memory)
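As a rough illustration of what vector-organized agent memory looks like, here is a minimal sketch, assuming a toy bag-of-words "embedding" over a fixed vocabulary as a stand-in for a real embedding model, and brute-force cosine search as a stand-in for a vector library or database index. The `VectorMemory` class and `VOCAB` are invented for this example.

```python
import numpy as np

# Tiny fixed vocabulary; a real agent would use a learned embedding model.
VOCAB = ["dark", "mode", "display", "subway", "weekdays", "commute", "jazz", "music"]

def embed(text: str) -> np.ndarray:
    """L2-normalized bag-of-words vector over VOCAB (embedding-model stand-in)."""
    words = text.lower().split()
    v = np.array([float(words.count(w)) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

class VectorMemory:
    """Minimal vector store: brute-force cosine-similarity search over memories."""
    def __init__(self):
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vecs.append(embed(text))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        # Vectors are unit-norm, so the dot product equals cosine similarity.
        scores = [float(q @ v) for v in self.vecs]
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        return [self.texts[i] for i in order[:k]]

mem = VectorMemory()
mem.add("user prefers dark mode on every display")
mem.add("user takes the subway to commute on weekdays")
print(mem.retrieve("switch the display to dark mode", k=1)[0])
# → user prefers dark mode on every display
```

The papers below replace each stand-in with real machinery: learned embeddings, approximate nearest-neighbor indexes (vector libraries), and persistent vector databases that scale this lookup beyond brute force.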

Vector Library

Vector Database

Other Forms of Memory

Optimizing the Efficiency of Memory

Search Design

Search Execution

Efficient Indexing

Security & Privacy of Personal LLM Agents

Security & privacy of AI/ML is a vast area with a large body of related work. Here we focus only on the papers related to LLMs and LLM agents.

Confidentiality (of User Data)

Integrity (of Agent Behavior)

Adversarial Attacks

Backdoor Attacks

Prompt Injection Attacks

Reliability (of Agent Decisions)

Problems

Improvement

Inspection

Acknowledgment

We sincerely thank many domain experts for their valuable feedback, including Xiaobo Peng (Autohome), Ligeng Chen (Honor Device), Miao Wei, Pengpeng He (Huawei), Hansheng Hong, Wenjun Chen, Zhiyao Yang (Oppo), Xuesheng Qi (vivo), Liang Tao, Lishun Sun, Shuang Dong (Xiaomi), and other anonymous contributors.

Citation

@article{li2024personal_llm_agents,
      title={Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security}, 
      author={Yuanchun Li and Hao Wen and Weijun Wang and Xiangyu Li and Yizhen Yuan and Guohong Liu and Jiacheng Liu and Wenxing Xu and Xiang Wang and Yi Sun and Rui Kong and Yile Wang and Hanfei Geng and Jian Luan and Xuefeng Jin and Zilong Ye and Guanjing Xiong and Fan Zhang and Xiang Li and Mengwei Xu and Zhijun Li and Peng Li and Yang Liu and Ya-Qin Zhang and Yunxin Liu},
      year={2024},
      journal={arXiv preprint arXiv:2401.05459}
}