<!-- <h4 align="center"> <img alt="AdalFlow logo" src="docs/source/_static/images/adalflow-logo.png" style="width: 100%;"> </h4> --> <h4 align="center"> <img alt="AdalFlow logo" src="https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/adalflow-logo.png" style="width: 100%;"> </h4> <h2> <p align="center"> ⚡ The Library to Build and Auto-optimize LLM Applications ⚡ </p> </h2> <p align="center"> <a href="https://colab.research.google.com/drive/1TKw_JHE42Z_AWo8UuRYZCO2iuMgyslTZ?usp=sharing"> <img alt="Try Quickstart in Colab" src="https://colab.research.google.com/assets/colab-badge.svg"> </a> </p> <h4 align="center"> <p> <a href="https://adalflow.sylph.ai/">All Documentation</a> | <a href="https://adalflow.sylph.ai/apis/components/components.model_client.html">Models</a> | <a href="https://adalflow.sylph.ai/apis/components/components.retriever.html">Retrievers</a> | <a href="https://adalflow.sylph.ai/apis/components/components.agent.html">Agents</a> | <a href="https://adalflow.sylph.ai/tutorials/evaluation.html"> LLM evaluation</a> | <a href="https://adalflow.sylph.ai/use_cases/question_answering.html">Trainer & Optimizers</a> </p> </h4> <p align="center"> <a href="https://pypi.org/project/adalflow/"> <img alt="PyPI Version" src="https://img.shields.io/pypi/v/adalflow?style=flat-square"> </a> <a href="https://pypi.org/project/adalflow/"> <img alt="PyPI Downloads" src="https://static.pepy.tech/badge/adalflow"> </a> <a href="https://pypi.org/project/adalflow/"> <img alt="PyPI Downloads" src="https://static.pepy.tech/badge/adalflow/month"> </a> <a href="https://star-history.com/#SylphAI-Inc/AdalFlow"> <img alt="GitHub stars" src="https://img.shields.io/github/stars/SylphAI-Inc/AdalFlow?style=flat-square"> </a> <a href="https://github.com/SylphAI-Inc/AdalFlow/issues"> <img alt="Open Issues" src="https://img.shields.io/github/issues-raw/SylphAI-Inc/AdalFlow?style=flat-square"> </a> <a href="https://opensource.org/license/MIT"> <img 
alt="License" src="https://img.shields.io/github/license/SylphAI-Inc/AdalFlow"> </a> <a href="https://discord.gg/ezzszrRZvT"> <img alt="discord-invite" src="https://dcbadge.vercel.app/api/server/ezzszrRZvT?style=flat"> </a> </p> <h4> <p align="center"> For AI researchers, product teams, and software engineers who want to learn the AI way. </p> </h4> <!-- <a href="https://colab.research.google.com/drive/1PPxYEBa6eu__LquGoFFJZkhYgWVYE6kh?usp=sharing"> <img alt="Try Quickstart in Colab" src="https://colab.research.google.com/assets/colab-badge.svg"> </a> --> <!-- <a href="https://pypistats.org/packages/lightrag"> <img alt="PyPI Downloads" src="https://img.shields.io/pypi/dm/lightRAG?style=flat-square"> </a> -->

## Quick Start
Install AdalFlow with pip:

```shell
pip install adalflow
```
Please refer to the full installation guide for more details.
- Try the Building Quickstart in Colab to see how AdalFlow builds task pipelines, including chatbots, RAG, agents, and structured output.
- Try the Optimization Quickstart to see how AdalFlow can optimize the task pipeline.
## Why AdalFlow
- Embracing a design pattern similar to PyTorch, AdalFlow is powerful, light, modular, and robust.
- AdalFlow provides model-agnostic building blocks for LLM task pipelines, ranging from RAG and agents to classical NLP tasks like text classification and named entity recognition. It is easy to get high performance using only manual prompting.
- AdalFlow provides a unified auto-differentiative framework for both zero-shot prompt optimization and few-shot optimization. It advances existing auto-optimization research, including Text-Grad and DsPy. Through our research, Text-Grad 2.0 and Learn-to-Reason Few-shot In-Context Learning, AdalFlow's `Trainer` achieves the highest accuracy while being the most token-efficient.
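To make the "auto-differentiative" idea concrete, here is a minimal conceptual sketch of textual-gradient prompt optimization. This is *not* AdalFlow's actual API: the three `mock_llm_*` functions are stand-ins for real model calls, and all names are hypothetical.

```python
# Conceptual sketch of textual-gradient prompt optimization (hypothetical
# names, NOT AdalFlow's real API). A "backward engine" turns failures into
# textual feedback, and a proposer edits the prompt accordingly.

def mock_llm_task(prompt: str, example: str) -> str:
    # Stand-in for a model call: it only "understands" the task once
    # the prompt has been refined to mention sentiment words.
    if "sentiment words" not in prompt:
        return "negative"
    return "positive" if "good" in example else "negative"

def mock_llm_feedback(prompt: str, failures: list) -> str:
    # Stand-in for the backward engine producing a textual "gradient".
    return f"Prompt failed on {len(failures)} examples; be more specific."

def mock_llm_propose(prompt: str, gradient: str) -> str:
    # Stand-in for the optimizer step that rewrites the prompt.
    return prompt + " Pay attention to sentiment words."

dev_set = [("good movie", "positive"), ("bad plot", "negative")]
prompt = "Classify the sentiment."

for _ in range(2):  # a couple of optimization steps
    failures = [(x, y) for x, y in dev_set if mock_llm_task(prompt, x) != y]
    if not failures:
        break
    gradient = mock_llm_feedback(prompt, failures)  # textual "gradient"
    prompt = mock_llm_propose(prompt, gradient)     # "descent" step

accuracy = sum(mock_llm_task(prompt, x) == y for x, y in dev_set) / len(dev_set)
```

Real systems replace the stubs with LLM calls, but the loop shape (evaluate, generate textual feedback, propose a new prompt) is the core idea.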
Here is an optimization demonstration on a text classification task:
<!-- <p align="center"> <img src="docs/source/_static/images/classification_training_map.png" alt="AdalFlow Auto-optimization" style="width: 80%;"> </p> <p align="center"> <img src="docs/source/_static/images/classification_opt_prompt.png" alt="AdalFlow Auto-optimization" style="width: 80%;"> </p> --> <p align="center" style="background-color: #f0f0f0;"> <img src="https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/classification_training_map.png" style="width: 80%;" alt="AdalFlow Auto-optimization"> </p> <p align="center" style="background-color: #f0f0f0;"> <img src="https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/classification_opt_prompt.png" alt="AdalFlow Optimized Prompt" style="width: 80%;"> </p>

Among all libraries, AdalFlow achieved the highest accuracy with manual prompting (starting at 82%) and the highest accuracy after optimization.
Further reading: Optimize Classification
## Light, Modular, and Model-Agnostic Task Pipeline
LLMs are like water; AdalFlow helps you quickly shape them into any application, from GenAI applications such as chatbots, translation, summarization, code generation, RAG, and autonomous agents to classical NLP tasks like text classification and named entity recognition.
AdalFlow has two fundamental, but powerful, base classes: `Component` for the pipeline and `DataClass` for data interaction with LLMs.
The result is a library with minimal abstraction, providing developers with maximum customizability.
You have full control over the prompt template, the model you use, and the output parsing for your task pipeline.
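The composition pattern can be sketched in a few lines of plain Python. Note this is a conceptual sketch of the PyTorch-like `Component` idea, not AdalFlow's actual classes; all names here (`PromptTemplate`, `MockModelClient`, `QAPipeline`) are hypothetical, and the model client is a stub.

```python
# Conceptual sketch of the Component pattern (hypothetical names, not
# AdalFlow's real classes): every pipeline piece implements `call`, and
# pieces compose by plain attribute assignment, keeping the pipeline
# readable and fully customizable.

class Component:
    def call(self, *args, **kwargs):
        raise NotImplementedError

class PromptTemplate(Component):
    def __init__(self, template: str):
        self.template = template

    def call(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class MockModelClient(Component):
    """Stand-in for a real model client (OpenAI, Anthropic, local, ...)."""
    def call(self, prompt: str) -> str:
        return f"echo: {prompt}"

class QAPipeline(Component):
    def __init__(self):
        self.prompt = PromptTemplate("Answer concisely: {question}")
        self.model = MockModelClient()

    def call(self, question: str) -> str:
        return self.model.call(self.prompt.call(question=question))

out = QAPipeline().call(question="What is 2+2?")
```

Because each piece is an ordinary object you own, swapping the template, the model, or the output parsing is just replacing an attribute.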
<p align="center"> <img src="https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/AdalFlow_task_pipeline.png" alt="AdalFlow Task Pipeline"> </p> <!-- LLMs are like water; they can be shaped into anything, from GenAI applications such as chatbots, translation, summarization, code generation, and autonomous agents to classical NLP tasks like text classification and named entity recognition. They interact with the world beyond the model's internal knowledge via retrievers, memory, and tools (function calls). Each use case is unique in its data, business logic, and user experience. Because of this, no library can provide out-of-the-box solutions. Users must build towards their own use case. This requires the library to be modular, robust, and have a clean, readable codebase. The only code you should put into production is code you either 100% trust or are 100% clear about how to customize and iterate. --> <!-- This is what AdalFlow is: light, modular, and robust, with a 100% readable codebase. -->

Further reading: How We Started, <!-- [Introduction](https://adalflow.sylph.ai/), --> Design Philosophy and Class hierarchy.
<!-- **PyTorch** ```python import torch import torch.nn as nn class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.dropout1 = nn.Dropout2d(0.25) self.dropout2 = nn.Dropout2d(0.5) self.fc1 = nn.Linear(9216, 128) self.fc2 = nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = self.dropout1(x) x = self.dropout2(x) x = self.fc1(x) return self.fc2(x) ``` -->

## Unified Framework for Auto-Optimization
AdalFlow provides token-efficient and high-performing prompt optimization within a unified framework.
To optimize your pipeline, simply define a `Parameter` and pass it to AdalFlow's `Generator`.
Whether you need to optimize task instructions or few-shot demonstrations,
AdalFlow's unified framework offers an easy way to diagnose, visualize, debug, and train your pipeline.
This Dynamic Computation Graph demonstrates how our auto-differentiation and the dynamic computation graph work.
There is no need to manually define nodes and edges; AdalFlow automatically traces the computation graph for you.
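The idea behind automatic tracing can be illustrated with a tiny, micrograd-style sketch. This is hypothetical code, not AdalFlow's implementation: each produced value simply records its inputs, so the graph exists as a byproduct of running the pipeline.

```python
# Conceptual sketch of dynamic graph tracing (hypothetical, micrograd-style):
# producing a value records its predecessors, so no nodes or edges are
# ever declared by hand.

class Node:
    def __init__(self, data, preds=(), op=""):
        self.data, self.preds, self.op = data, tuple(preds), op

def combine(a: "Node", b: "Node", op: str) -> "Node":
    # Producing a value automatically links it to its inputs.
    return Node(f"{a.data}|{b.data}", preds=(a, b), op=op)

instruction = Node("Classify the text.")
fewshot = Node("Example: 'great' -> positive")
prompt = combine(instruction, fewshot, op="prompt_assembly")

def ancestors(node):
    """Walk backwards from an output to recover the traced graph."""
    seen, stack = [], [node]
    while stack:
        n = stack.pop()
        for p in n.preds:
            if p not in seen:
                seen.append(p)
                stack.append(p)
    return seen

parents = ancestors(prompt)
```

In a textual-gradient system, the same backward walk is what lets feedback on the output propagate to every prompt piece that contributed to it.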
## Trainable Task Pipeline
Just define it as a `Parameter` and pass it to AdalFlow's `Generator`.
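As a rough mental model, a trainable prompt parameter can be pictured like this. The field names below are illustrative, not AdalFlow's exact signature:

```python
# Hypothetical sketch of a trainable prompt parameter (illustrative names,
# not AdalFlow's exact class): it wraps a piece of the prompt, marks whether
# the optimizer may rewrite it, and accumulates textual "gradients".

from dataclasses import dataclass, field

@dataclass
class Parameter:
    data: str                      # current text of this prompt part
    role_desc: str = ""            # tells the optimizer what this part does
    requires_opt: bool = True      # frozen parts are skipped by the trainer
    gradients: list = field(default_factory=list)  # accumulated feedback

task_desc = Parameter(
    data="You are a sentiment classifier.",
    role_desc="task instruction",
    requires_opt=True,
)

# During training, the backward engine appends textual feedback...
task_desc.gradients.append("Mention the exact output labels.")
# ...and the optimizer proposes a new value for trainable parameters only.
if task_desc.requires_opt:
    task_desc.data += " Answer with 'positive' or 'negative'."
```

Splitting the prompt into such parameters is what lets the trainer optimize instructions and demonstrations independently.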
## AdalComponent & Trainer
`AdalComponent` acts as the 'interpreter' between the task pipeline and the trainer. It defines the training and validation steps, optimizers, evaluators, loss functions, the backward engine for textual gradients or for tracing the demonstrations, and the teacher generator.
## Documentation
AdalFlow's full documentation is available at adalflow.sylph.ai:
- How We Started
- Introduction
- Full installation guide
- Design philosophy
- Class hierarchy
- Tutorials
- Supported Models
- Supported Retrievers
- API reference
## AdalFlow: A Tribute to Ada Lovelace
AdalFlow is named in honor of Ada Lovelace, the pioneering female mathematician who first recognized that machines could go beyond mere calculations. As a team led by a female founder, we aim to inspire more women to pursue careers in AI.
## Community & Contributors
AdalFlow is a community-driven project, and we welcome everyone to join us in building the future of LLM applications.
Join our Discord community to ask questions, share your projects, and get updates on AdalFlow.
To contribute, please read our Contributor Guide.
### Contributors
## Acknowledgements
Many existing works greatly inspired the AdalFlow library! Here is a non-exhaustive list:
- 📚 PyTorch for the design philosophy and the design pattern of `Component`, `Parameter`, and `Sequential`.
- 📚 Micrograd: a tiny autograd engine for our auto-differentiative architecture.
- 📚 Text-Grad for the Textual Gradient Descent text optimizer.
- 📚 DSPy for inspiring the `__{input/output}__fields` in our `DataClass` and the bootstrap few-shot optimizer.
- 📚 OPRO for adding past text instructions along with their accuracy in the text optimizer.
- 📚 PyTorch Lightning for the `AdalComponent` and `Trainer`.
## Citation
```bibtex
@software{Yin2024AdalFlow,
  author = {Li Yin},
  title = {{AdalFlow: The Library for Large Language Model (LLM) Applications}},
  month = {7},
  year = {2024},
  doi = {10.5281/zenodo.12639531},
  url = {https://github.com/SylphAI-Inc/AdalFlow}
}
```
<!-- # Star History
[![Star History Chart](https://api.star-history.com/svg?repos=SylphAI-Inc/AdalFlow&type=Date)](https://star-history.com/#SylphAI-Inc/AdalFlow&Date) -->
<!--
<a href="https://trendshift.io/repositories/11559" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11559" alt="SylphAI-Inc%2FAdalFlow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> -->