<div align="center">

# Infinity Embedding Serverless Worker

Deploy almost any text embedding and reranker model behind high-throughput, OpenAI-compatible endpoints on RunPod Serverless, powered by Infinity, the fastest embedding inference engine built for serving.

</div>

## Supported Models

When using the `torch` backend, you can deploy any model supported by the sentence-transformers library.

This also means that you can deploy any model from the Massive Text Embedding Benchmark (MTEB) Leaderboard, which is currently the most popular and comprehensive leaderboard for embedding models.

## Setting up the Serverless Endpoint

### Option 1: Deploy any model directly from the RunPod Console with the pre-built Docker image

> [!NOTE]
> We are adding a UI for deployment similar to Worker vLLM, but for now you can manually create the endpoint with the regular serverless configurator.

We offer a pre-built Docker Image for the Infinity Embedding Serverless Worker that you can configure entirely with Environment Variables when creating the Endpoint:

#### 1. Select Worker Image Version

You can use the following Docker images directly and configure them via environment variables.

| CUDA Version | Stable (Latest Release) | Development (Latest Commit) | Note |
|---|---|---|---|
| 11.8.0 | `runpod/worker-infinity-embedding:stable-cuda11.8.0` | `runpod/worker-infinity-embedding:dev-cuda11.8.0` | Available on all RunPod Workers without additional selection needed. |
| 12.1.0 | `runpod/worker-infinity-embedding:stable-cuda12.1.0` | `runpod/worker-infinity-embedding:dev-cuda12.1.0` | When creating an Endpoint, select CUDA Versions 12.4, 12.3, 12.2 and 12.1 in the filter. About 10% fewer total available machines than 11.8.0, but higher performance. |

> [!NOTE]
> Latest image version (pre-release): `runpod/worker-infinity-text-embedding:0.0.1-cuda12.1.0`

#### 2. Select your models and configure your deployment with Environment Variables
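As a rough sketch, a two-model deployment might be configured like this. The variable names below (e.g. `MODEL_NAMES`, `BATCH_SIZES`) are assumptions for illustration; check the worker's configuration reference for the authoritative list:

```
# Hypothetical configuration; variable names are assumptions, not authoritative.
MODEL_NAMES=BAAI/bge-small-en-v1.5;intfloat/e5-large-v2  # semicolon-separated model IDs
BATCH_SIZES=32;32                                        # one batch size per model
DTYPES=auto;auto                                         # one dtype per model
BACKEND=torch                                            # inference backend (see Supported Models)
```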

### Option 2: Bake models into the Docker image

Coming soon!

## Usage

There are two ways to use the endpoint: OpenAI Compatibility, matching how you would use the OpenAI API, and Standard Usage with the RunPod API. Note that reranking is only available with Standard Usage.

### OpenAI Compatibility

#### Set up

1. Install the OpenAI Python SDK:

   ```bash
   pip install openai
   ```

2. Initialize the OpenAI client, setting the API key to your RunPod API key and the base URL to `https://api.runpod.ai/v2/YOUR_ENDPOINT_ID/openai/v1`, where `YOUR_ENDPOINT_ID` is the ID of your endpoint, e.g. `elftzf0lld1vw1`:

   ```python
   from openai import OpenAI

   client = OpenAI(
       api_key=RUNPOD_API_KEY,
       base_url="https://api.runpod.ai/v2/YOUR_ENDPOINT_ID/openai/v1",
   )
   ```

#### Embedding

1. Define the input. You may embed a single text or a list of texts:

   - Single text:

     ```python
     embedding_input = "Hello, world!"
     ```

   - List of texts:

     ```python
     embedding_input = ["Hello, world!", "This is a test."]
     ```

2. Get the embeddings:

   ```python
   client.embeddings.create(
       model="YOUR_DEPLOYED_MODEL_NAME",
       input=embedding_input,
   )
   ```

   where `YOUR_DEPLOYED_MODEL_NAME` is the name of one of the models you deployed to the worker.
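Since the endpoint is OpenAI-compatible, the response follows the OpenAI embeddings response shape; a minimal sketch of reading the vectors back:

```python
response = client.embeddings.create(
    model="YOUR_DEPLOYED_MODEL_NAME",
    input=["Hello, world!", "This is a test."],
)

# response.data holds one item per input text, in order.
for item in response.data:
    print(len(item.embedding))  # dimensionality of the deployed model
```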

### Standard Usage

#### Set up

You may use `/run` (asynchronous; starts the job and returns a job ID) or `/runsync` (synchronous; waits for the job to finish and returns the result).

#### Embedding

Inputs:
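As an illustrative sketch of a synchronous embedding request: the `model` and `input` payload fields below mirror the OpenAI-style schema and are assumptions here; consult the worker's input reference for the authoritative field names.

```python
import requests

RUNPOD_API_KEY = "YOUR_RUNPOD_API_KEY"
ENDPOINT_ID = "YOUR_ENDPOINT_ID"

# /runsync waits for the job to finish and returns the result directly.
response = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {RUNPOD_API_KEY}"},
    json={
        "input": {
            "model": "YOUR_DEPLOYED_MODEL_NAME",  # assumed field name
            "input": ["Hello, world!", "This is a test."],  # assumed field name
        }
    },
)
print(response.json())
```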

#### Reranking

Inputs:
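A reranking request might look like the sketch below; the `query` and `docs` field names are assumptions for illustration, not the worker's confirmed schema:

```python
import requests

RUNPOD_API_KEY = "YOUR_RUNPOD_API_KEY"
ENDPOINT_ID = "YOUR_ENDPOINT_ID"

# Rerank a set of documents against a query with a deployed reranker model.
response = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {RUNPOD_API_KEY}"},
    json={
        "input": {
            "model": "YOUR_DEPLOYED_RERANKER_NAME",  # assumed field name
            "query": "Which document mentions RunPod?",  # assumed field name
            "docs": [  # assumed field name
                "Infinity is an embedding inference engine.",
                "RunPod Serverless hosts this worker.",
            ],
        }
    },
)
print(response.json())
```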

## Acknowledgements

We'd like to thank Michael Feil for creating the Infinity Embedding Engine and for being actively involved in the development of this worker!