Large Language Model & GPT-4 Tech and Industry Resource Map

(A technology and industry resource roundup for large language models, multimodal large models, and generative pre-trained Transformers)

1. Discussion Groups(讨论小组汇总)
2. Tech Guide(技术入门)
3. Investment Analysis(投资分析)
4. Industry Analysis(产业分析)
5. Model Resources(模型资源)
6. Application Open Source Projects(应用开源项目)
7. Related Discussion(相关讨论)
8. Web & Paper(网页论文资源)

This resource map is for my child and for the young developers who will face an AGI world in the future.

It has been created to help equip children and young developers with the knowledge and skills they will need to navigate the rapidly evolving world of AGI. As we continue to develop and rely on AI technologies, it is becoming increasingly important for younger generations to be prepared for the challenges and opportunities that lie ahead. To make sure they are ready for this future, we have compiled a comprehensive list of resources and tools that will help them understand the basics of AGI.

1. Discussion Groups(讨论小组汇总)

| Topic | Group | Join Link |
| --- | --- | --- |
| Large-model algorithms & technology | AI Large Model · GPT-4 Algorithm & Technology Discussion Group 2 | http://c.nxw.so/cgpt (follow the link for the group-joining assistant) |
| Large-model startups & investment | AI Large Model · GPT-4 Startup & Investment Discussion Group 2 | http://c.suo.nz/cinv (follow the link for the group-joining assistant) |
| Large-model application solutions | AI Large Model · GPT-4 Application Solutions Discussion Group | |
| Large-model compute supply & demand | AI Large Model · GPT-4 Compute Supply & Demand Exchange Group | |
| GPGPU | GPGPU & Advanced GPU Design Discussion Group 2 | |
| Compute-in-memory / memory | Compute-in-Memory & Memory Technology Discussion Group 2 | |
| Automotive & domain-controller chips | Automotive & Domain-Controller Chip Design Discussion Group | |
| EDA large models | Open-Source EDA & EDA Large Model Discussion Group | |
| DSA | AI Chip & DSA Design Discussion Group | |
| AI chips & GPGPU compilers | AI Chip & GPGPU Compiler Discussion Group | |
| RISC-V | RISC-V Architecture & Design Discussion Group | |

2. Tech Guide(技术入门)

2.1 GPT-4 Tech Report(GPT-4技术报告)

| Article | Link |
| --- | --- |
| Chen Wei: GPT-4 Core Technology Analysis Report (2): Technical Analysis of GPT-4 (collected in GPT-4/ChatGPT Technology and Industry Analysis) | https://zhuanlan.zhihu.com/p/620087339 |
| Chen Wei: GPT-4 Core Technology Analysis Report (5): GPT-4 Compute Essentials and Chips (collected in GPT-4/ChatGPT Technology and Industry Analysis) | https://zhuanlan.zhihu.com/p/611464068 |
| Chen Wei on Chips: A Hardcore Read of the GPT-4 Large Model (collected in GPT-4/ChatGPT Technology and Industry Analysis) | https://mp.weixin.qq.com/s/nV2ynNtKmMNkADA8Wg4TVQ |

2.2 ChatGPT Tech Report(ChatGPT技术报告)

| Article | Link |
| --- | --- |
| Chen Wei: ChatGPT Development History, Principles, Technical Architecture, and Industry Future (collected in GPT-4/ChatGPT Technology and Industry Analysis) | https://zhuanlan.zhihu.com/p/590655677 |
| Chen Wei: ChatGPT Report: Technical Deep Dive and Industry Future (slide deck, with some updated content) | https://zhuanlan.zhihu.com/p/608917240 |
| ChatGPT/InstructGPT Explained in Detail | https://zhuanlan.zhihu.com/p/590311003 |
| [Reinforcement Learning 229] ChatGPT/InstructGPT | https://zhuanlan.zhihu.com/p/589827115 |
| OpenAI's Path to AGI Language Intelligence: From GPT-1 to ChatGPT | https://zhuanlan.zhihu.com/p/597263206 |

2.3 Large Language Model Technology(大模型技术)

| Classification | Article | Link |
| --- | --- | --- |
| Basic | The Road to AGI: Key Technical Essentials of Large Language Models (LLMs) | https://zhuanlan.zhihu.com/p/597586623 |
| RLHF | The Key Technologies Behind ChatGPT: RLHF, IFT, CoT, and Red Teaming | https://zhuanlan.zhihu.com/p/602458131 |
| Training | Why has ChatGPT's multi-turn, in-context dialogue ability improved so much? | https://www.zhihu.com/question/575481512/answer/2852937178 |
| PPO | Proximal Policy Optimization (PPO) Explained | https://towardsdatascience.com/proximal-policy-optimization-ppo-explained-abed1952457b |
| PPO | A Simple Introduction to Proximal Policy Optimization (PPO) with the PARL Framework | https://aistudio.baidu.com/aistudio/projectdetail/632270 |
| Transformer | A Functional Overview of the Transformer (a minimal attention sketch follows this table) | https://zhuanlan.zhihu.com/p/604444663 |
| Transformer | The Transformer Model Explained in Detail | https://zhuanlan.zhihu.com/p/338817680 |
| Prompt | Prompt-based Language Models: A Summary of Template-Augmented Language Models | https://zhuanlan.zhihu.com/p/366771566 |
| Chain of Thought | With Chain-of-Thought Prompting, Can Large Models Do Logical Reasoning? | https://zhuanlan.zhihu.com/p/589087074 |
| Knowledge Base | Quivr: Building a Local Knowledge Base with OpenAI Embeddings | https://zhuanlan.zhihu.com/p/631038668 |
| GPT Cache | GPTCache: Cut LLM Query Cost 10x and Raise Speed 100x with Caching | https://zhuanlan.zhihu.com/p/645601760 |
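
The two Transformer entries above describe the attention mechanism at length; as a quick companion, here is a minimal NumPy sketch of scaled dot-product attention. It follows the standard Q/K/V notation and is only an illustration, not code from any of the linked articles.

```python
# Minimal scaled dot-product attention sketch (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v); mask: True where attention is allowed."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # similarity between queries and keys
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block attention to masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                          # weighted sum of values

# Toy example: self-attention over 3 tokens with d_model = 4.
x = np.random.randn(3, 4)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```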

2.4 Application Guide(应用指南)

| Classification | Article | Link |
| --- | --- | --- |
| Basic | LLMsPracticalGuide | https://github.com/Mooler0410/LLMsPracticalGuide |
| Basic | HuggingLLM | https://github.com/datawhalechina/hugging-llm |
| Prompt | Prompt Engineering Guide | https://www.promptingguide.ai/zh |
| Prompt | An Introductory LLM Course for Developers | https://github.com/datawhalechina/prompt-engineering-for-developers |
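
The prompt-engineering resources above cover patterns such as few-shot examples and chain-of-thought triggers. As a minimal, model-agnostic illustration of combining the two (the `ask_llm` function is a hypothetical placeholder, not any specific API):

```python
# Minimal prompt-construction sketch: one few-shot example plus the
# zero-shot chain-of-thought trigger. `ask_llm` is a hypothetical
# placeholder for whatever LLM client you actually use.
FEW_SHOT_EXAMPLE = (
    "Q: A farmer has 3 fields with 12 trees each. How many trees in total?\n"
    "A: Each field has 12 trees and there are 3 fields, so 3 * 12 = 36. The answer is 36.\n"
)

def build_prompt(question: str) -> str:
    # "Let's think step by step" is the common zero-shot CoT trigger phrase.
    return f"{FEW_SHOT_EXAMPLE}\nQ: {question}\nA: Let's think step by step."

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the model or API of your choice")

if __name__ == "__main__":
    print(build_prompt("A library has 5 shelves with 40 books each. How many books?"))
```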

3. Investment Analysis(投资分析)

| Article | Link |
| --- | --- |
| Chen Wei on Chips: A Hardcore Read of the GPT-4 Large Model (collected in GPT-4/ChatGPT Technology and Industry Analysis) | https://mp.weixin.qq.com/s/nV2ynNtKmMNkADA8Wg4TVQ |
| Chen Wei on Chips: ChatGPT Development History, Principles, Technical Architecture, and Industry Future (collected in Advanced AI Technology Deep Dives) | https://zhuanlan.zhihu.com/p/590655677 |
| Industry Transformation and Investment Opportunities Brought by ChatGPT (Jiuweifan) | |
| ChatGPT Research Framework | https://github.com/chenweiphd/ChatGPT-Hub/blob/main/invest/ChatGPT%20research%20framwork-2023.pdf |
| From ChatGPT to Generative AI: A New AI Paradigm Redefining Productivity (2023-02, macro trends) | https://github.com/chenweiphd/LargeLanguageModel-and-GPT-4-Hub/blob/main/invest/%E4%BB%8ECHAT-GPT%E5%88%B0%E7%94%9F%E6%88%90%E5%BC%8FAI.pdf |
| Unstoppable ChatGPT: Set to Drive Strong Long-Term Growth in the Chip Market | https://zhuanlan.zhihu.com/p/604194985 |

4. Industry Analysis(产业分析)

| Article | Link |
| --- | --- |
| ChatGPT's Technical Evolution Path and Application Outlook | https://zhuanlan.zhihu.com/p/590380191 |
| Which jobs will ChatGPT replace, and who needs to rethink their career plans? | https://www.zhihu.com/question/582809884/answer/2883146417 |
| Scary! The Disruptive New Technology ChatGPT Will Put Ten Types of Workers Out of a Job | https://zhuanlan.zhihu.com/p/603655945 |

5. Model Resources(模型资源)

5.1 Foundation Model(基础模型)

5.1.1 Text Model(文本模型)

| Model | Description | Link |
| --- | --- | --- |
| ChatGLM | Tsinghua model optimized for Chinese question answering and dialogue | https://github.com/THUDM/ChatGLM-6B |
| ChatGLM2-6B | Keeps the first generation's strengths (fluent dialogue, low deployment barrier) and adds GLM's hybrid objective function (see the inference sketch after this table) | https://github.com/THUDM/ChatGLM2-6B |
| Chinese-LLaMA-Alpaca | Chinese LLaMA & Alpaca large language models | https://github.com/ymcui/Chinese-LLaMA-Alpaca |
| BELLE | Open-sources a series of models tuned from BLOOMZ and LLaMA, along with training data, related models, training code, and application scenarios | https://github.com/LianjiaTech/BELLE |
| Luotuo-Chinese-LLM | Collects a series of open-source Chinese LLM projects, including models further fine-tuned from existing open models (ChatGLM, MOSS, LLaMA) and instruction-tuning datasets | https://github.com/LC1332/Luotuo-Chinese-LLM |
| Baichuan-7B | Open-source, commercially usable large-scale pre-trained language model developed by Baichuan Intelligence | https://github.com/baichuan-inc/Baichuan-13B |
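
As a quick illustration of how these open checkpoints are typically used, below is a minimal inference sketch for ChatGLM2-6B following the usage pattern documented in the THUDM repositories. It assumes the `transformers` library, a CUDA GPU, and network access to download the `THUDM/chatglm2-6b` weights; treat it as a sketch rather than a supported recipe.

```python
# Minimal ChatGLM2-6B inference sketch (assumes transformers is installed,
# a CUDA GPU is available, and the THUDM/chatglm2-6b weights can be fetched).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# The ChatGLM remote code exposes a chat() helper that manages multi-turn history.
response, history = model.chat(tokenizer, "你好,请介绍一下大语言模型", history=[])
print(response)
```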

5.1.2 Multimodal(多模态)

| Model | Description | Link |
| --- | --- | --- |
| VisualGLM-6B | Open-source multimodal dialogue model supporting images, Chinese, and English; the language model is based on ChatGLM-6B | https://github.com/THUDM/VisualGLM-6B |
| VisCPM | Open-source family of multimodal large models with bilingual (Chinese/English) multimodal chat (VisCPM-Chat) and text-to-image generation (VisCPM-Paint) | https://github.com/OpenBMB/VisCPM |

5.2 Domain Model(垂域模型)

5.3 Dataset(数据集)

5.3.1 Pre-train Dataset(预训练数据集)

| Dataset | Description | Link |
| --- | --- | --- |
| MNBVC | Very large-scale Chinese corpus covering not only mainstream culture but also many niche subcultures and even "Martian" internet slang | https://github.com/esbatmop/MNBVC |
| WuDaoCorporaText | Large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support large-model training research | https://data.baai.ac.cn/details/WuDaoCorporaText |
| CLUECorpus2020 | 100 GB of high-quality Chinese pre-training corpus obtained by cleaning the Chinese portion of Common Crawl | https://github.com/CLUEbenchmark/CLUECorpus2020 |
| Argilla | Open-source data curation platform for LLMs; MLOps for NLP, from data labeling to model monitoring | https://github.com/argilla-io/argilla |

5.3.2 Finetune Dataset(精调数据集)

| Dataset | Description | Link |
| --- | --- | --- |
| Alpaca-CoT | Unifies a rich collection of instruction fine-tuning (IFT) data | https://github.com/PhoebusSi/Alpaca-CoT |
| BELLE-data-1.5M | Generated with self-instruct using Chinese seed tasks | https://github.com/LianjiaTech/BELLE/tree/main/data/1.5M |
| Alpaca-GPT-4 | Generated with self-instruct using Chinese seed tasks | https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM |

5.3.3 RLHF(人类反馈强化学习数据集)

| Dataset | Description | Link |
| --- | --- | --- |
| CValues | Value-alignment dataset with 145k samples | https://github.com/X-PLUG/CValues |

5.4 Finetune(微调)

| Item | Description | Link |
| --- | --- | --- |
| LLaMA Efficient Tuning | PEFT-based fine-tuning framework for LLaMA | https://github.com/hiyouga/LLaMA-Efficient-Tuning |
| ChatGLM Efficient Tuning | Efficient PEFT-based fine-tuning for ChatGLM | https://github.com/hiyouga/ChatGLM-Efficient-Tuning |
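
Both frameworks above build on PEFT-style parameter-efficient fine-tuning. A minimal, generic LoRA setup with the Hugging Face `peft` library looks roughly like the sketch below; the base checkpoint and hyperparameters are illustrative assumptions, not either framework's defaults.

```python
# Generic LoRA setup sketch with Hugging Face peft -- not the configuration
# used by the frameworks above. Checkpoint name and hyperparameters are
# illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

BASE = "path/to/your-base-model"  # e.g. a LLaMA- or ChatGLM-family checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(BASE, trust_remote_code=True)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank adapter matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary by architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the small adapter weights are trainable

# ...then fine-tune with transformers.Trainer (or a custom loop) on an
# instruction dataset; the adapters can be merged back for inference.
```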

5.5 Compression(压缩)

| Item | Description | Link |
| --- | --- | --- |
| RPTQ4LLM | RPTQ: Reorder-Based Post-Training Quantization for Large Language Models | https://github.com/hahnyuan/RPTQ4LLM |
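
For context on what post-training quantization does, here is a plain per-channel symmetric int8 weight-quantization sketch. Note that this is generic PTQ for illustration only, not RPTQ's reorder-based activation quantization.

```python
# Plain per-channel symmetric int8 weight quantization -- generic PTQ for
# context, NOT RPTQ's reorder-based method.
import numpy as np

def quantize_per_channel(W: np.ndarray):
    """W: (out_features, in_features) weight matrix."""
    scale = np.abs(W).max(axis=1, keepdims=True) / 127.0  # one scale per output channel
    scale = np.where(scale == 0, 1.0, scale)               # avoid division by zero
    W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return W_q, scale

def dequantize(W_q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return W_q.astype(np.float32) * scale

W = np.random.randn(4, 8).astype(np.float32)
W_q, scale = quantize_per_channel(W)
print("max abs error:", np.abs(W - dequantize(W_q, scale)).max())
```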

6. Application Open Source Projects(应用开源项目)

| Project | Link |
| --- | --- |
| GPT-Neo (a minimal generation sketch follows this table) | https://github.com/EleutherAI/gpt-neo |
| A Wave of ChatGPT Open-Source Projects Is Here | https://zhuanlan.zhihu.com/p/590595246 |
| Open-Assistant (still in progress) | https://github.com/LAION-AI/Open-Assistant |
| Awesome ChatGPT implementations | https://github.com/stars/acheong08/lists/awesome-chatgpt |
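
As a small usage illustration for the GPT-Neo entry above, text generation with a released checkpoint through Hugging Face `transformers` can look like this; the 1.3B model size and the sampling settings are arbitrary choices for the sketch.

```python
# Minimal GPT-Neo text-generation sketch via the transformers pipeline.
# Checkpoint size and sampling settings are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
result = generator("Large language models are", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```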

7. Related Discussion(相关讨论)

| Article | Link |
| --- | --- |
| What is holding domestic teams back from building a product like ChatGPT: technology, money, or leadership? | https://www.zhihu.com/question/570782945/answer/2795547780 |
| Will the ChatGPT project be open-sourced? | https://www.zhihu.com/question/571390218/answer/2796908126 |
| Will ChatGPT replace search engines? | https://zhuanlan.zhihu.com/p/589533490 |
| How high are ChatGPT's technical barriers? Besides OpenAI, who else at home or abroad can reach a similar level? | https://www.zhihu.com/question/581806122/answer/2880224101 |

8. Web & Paper(网页论文资源)

ChatGPT: Optimizing Language Models for Dialogue https://openai.com/blog/chatgpt/

GPT-1 paper: Improving Language Understanding by Generative Pre-Training https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

GPT-2 paper: Language Models are Unsupervised Multitask Learners https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf

GPT-3 paper: Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165

InstructGPT paper: Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155

Hugging Face's explainer on RLHF: Illustrating Reinforcement Learning from Human Feedback (RLHF) https://huggingface.co/blog/rlhf

RLHF paper: Augmenting Reinforcement Learning with Human Feedback https://www.cs.utexas.edu/~ai-lab/pubs/ICML_IL11-knox.pdf

TAMER framework paper: Interactively Shaping Agents via Human Reinforcement https://www.cs.utexas.edu/~bradknox/papers/kcap09-knox.pdf

PPO algorithm: Proximal Policy Optimization Algorithms https://arxiv.org/abs/1707.06347

Chain of thought: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models https://arxiv.org/pdf/2201.11903.pdf

Scaling Instruction-Finetuned Language Models https://arxiv.org/pdf/2210.11416.pdf

ChatGPT technical discussion group: http://c.nxw.so/cgpt

Main Authors

CHEN Wei