
<div align="center">

PromptOptimizer

<img width="200" src="evaluations/artifacts/logo.png" alt="kevin inspired logo" />

Minimize LLM token complexity to save API costs and model computations.

</div>

Features

Why?

| Prompt | # Tokens | Correct Response? |
|---|---|---|
| Who is the president of the United States of America? | 11 | Yes |
| Who president US | 3 (-72%) | Yes |
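The counts above can be sanity-checked with any tokenizer; the snippet below is a small sketch using tiktoken's cl100k_base encoding (an assumption, since the tokenizer behind the table is not stated here).

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding; exact counts vary by tokenizer

full = "Who is the president of the United States of America?"
short = "Who president US"

n_full, n_short = len(enc.encode(full)), len(enc.encode(short))
print(n_full, n_short, f"-{(1 - n_short / n_full):.0%}")  # fewer input tokens -> lower API cost
```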

Installation

Quick Installation

```bash
pip install prompt-optimizer
```

Install from source

```bash
git clone https://github.com/vaibkumr/prompt-optimizer.git
cd prompt-optimizer
pip install -e .
```

Disclaimer

There is a compression vs. performance tradeoff: increased compression comes at the cost of reduced model performance. The tradeoff can be greatly mitigated by choosing the right optimizer for a given task. There is no single optimizer for all cases. There is no Adam here.

Getting started


```python
from prompt_optimizer.poptim import EntropyOptim

prompt = """The Belle Tout Lighthouse is a decommissioned lighthouse and British landmark located at Beachy Head, East Sussex, close to the town of Eastbourne."""
p_optimizer = EntropyOptim(verbose=True, p=0.1)  # p=0.1 removes roughly 10% of tokens
optimized_prompt = p_optimizer(prompt)
print(optimized_prompt)
```
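The other optimizers are used the same way. As an illustration only, here is the same pattern with a punctuation-removing optimizer; the class name PunctuationOptim is inferred from the evaluation table below and may differ in the installed version.

```python
from prompt_optimizer.poptim import PunctuationOptim  # class name inferred from the Evaluations table

prompt = "Who is the president of the United States of America?"
p_optimizer = PunctuationOptim(verbose=True)
optimized_prompt = p_optimizer(prompt)  # same call pattern as EntropyOptim above
print(optimized_prompt)
```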

Evaluations

The following are results for the logiqa OpenAI evals task, run on only the first 100 samples. Note that optimizer performance on this task should not be generalized to other tasks; more thorough testing and domain knowledge are needed to choose the optimal optimizer.

| Name | % Tokens Reduced | LogiQA Accuracy | USD Saved Per $100 |
|---|---|---|---|
| Default | 0.0 | 0.32 | 0.0 |
| Entropy_Optim_p_0.05 | 0.06 | 0.3 | 6.35 |
| Entropy_Optim_p_0.1 | 0.11 | 0.28 | 11.19 |
| Entropy_Optim_p_0.25 | 0.26 | 0.22 | 26.47 |
| Entropy_Optim_p_0.5 | 0.5 | 0.08 | 49.65 |
| SynonymReplace_Optim_p_1.0 | 0.01 | 0.33 | 1.06 |
| Lemmatizer_Optim | 0.01 | 0.33 | 1.01 |
| NameReplace_Optim | 0.01 | 0.34 | 1.13 |
| Punctuation_Optim | 0.13 | 0.35 | 12.81 |
| Autocorrect_Optim | 0.01 | 0.3 | 1.14 |
| Pulp_Optim_p_0.05 | 0.05 | 0.31 | 5.49 |
| Pulp_Optim_p_0.1 | 0.1 | 0.25 | 9.52 |
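As a rough sketch of how the two cost columns relate, percent token reduction can be computed with a tokenizer and translated linearly into savings on a fixed spend. This is an illustration under assumptions (tiktoken's cl100k_base encoding, linear pricing), not the repo's evaluation pipeline.

```python
import tiktoken

def savings(original: str, optimized: str, spend_usd: float = 100.0) -> tuple[float, float]:
    """Return (fraction of tokens reduced, USD saved on `spend_usd` of prompt spend)."""
    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding
    n_orig, n_opt = len(enc.encode(original)), len(enc.encode(optimized))
    reduced = 1 - n_opt / n_orig
    return reduced, spend_usd * reduced  # savings scale linearly with tokens removed

print(savings("Who is the president of the United States of America?", "Who president US"))
```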

Cost-Performance Tradeoff

The reduction in cost often comes with a loss in LLM performance. Almost every optimizer has hyperparameters that control this tradeoff.

For example, in EntropyOptim the hyperparameter p, a floating-point number between 0 and 1, controls the ratio of tokens to remove. p=1.0 corresponds to removing all tokens, while p=0.0 corresponds to removing none.
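A minimal sketch of that knob, assuming only the EntropyOptim interface shown in Getting started (construct with p, call on the prompt, print the result); accuracy still has to be measured with a downstream eval as in the table above.

```python
from prompt_optimizer.poptim import EntropyOptim

prompt = (
    "The Belle Tout Lighthouse is a decommissioned lighthouse and British landmark "
    "located at Beachy Head, East Sussex, close to the town of Eastbourne."
)

# Sweep p: larger values remove more tokens (cheaper prompts) at a greater risk to accuracy.
for p in (0.05, 0.1, 0.25, 0.5):
    p_optimizer = EntropyOptim(p=p)
    print(f"p={p}:", p_optimizer(prompt))
```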

The following chart shows the tradeoff for different values of p, as evaluated on the OpenAI evals logiqa task using the first 100 samples.

<div align="center"> <img src="evaluations/artifacts/tradeoff.png" alt="tradeoff" /> </div>

Contributing

There are several directions in which to contribute. Please see CONTRIBUTING.md for contribution guidelines and possible future directions.

Social

Contact us on Twitter: Vaibhav Kumar and Vaibhav Kumar.

Inspiration

<div align="center"> <img src="evaluations/artifacts/kevin.gif" alt="Image" /> </div>