Colivara Evaluation Project

Evaluation Results

This repository contains a comprehensive evaluation of the Colivara API for document management, search, and retrieval, using a Retrieval-Augmented Generation (RAG) model. This evaluation aims to assess Colivara's capabilities in managing document collections, performing efficient search operations, and calculating relevance metrics to measure performance.

| Benchmark | Colivara | vidore_colqwen2-v1.0 (Current Leader) | OCR + BM25 (chunk/embed pipeline) | Jina-CLIP (Contrastive VLM) |
| --- | --- | --- | --- | --- |
| ArxivQ | 88.1 | 88.1 | 31.6 | 25.4 |
| DocQ | 56.1 | 60.6 | 36.8 | 11.9 |
| InfoQ | 91.4 | 92.6 | 62.9 | 35.5 |
| TabF | 86.3 | 89.5 | 46.5 | 20.2 |
| TATQ | 71.7 | 81.4 | 62.7 | 3.3 |
| Shift | 91.3 | 90.7 | 64.3 | 3.8 |
| AI | 99.5 | 99.4 | 92.8 | 15.2 |
| Energy | 96.3 | 95.9 | 85.9 | 19.7 |
| Gov. | 96.7 | 96.3 | 83.9 | 21.4 |
| Health. | 98.3 | 98.1 | 87.2 | 20.8 |
| Avg. | 87.6 | 89.3 | 65.5 | 17.7 |

Table of Contents

  • Project Overview
  • Evaluation Results
  • Features
  • Requirements
  • Installation
  • Usage
  • File Structure
  • Configuration
  • Technical Details
  • Future Enhancements
  • License

Project Overview

The goal of this project is to evaluate Colivara’s document retrieval and management features, particularly for applications that rely on high-performance data search and retrieval. This includes testing Colivara's collection and document management, assessing its suitability for various search and retrieval scenarios, and benchmarking the platform with a RAG model to evaluate relevance based on real-world queries.

Evaluation Results

Below are the summarized evaluation results for the Colivara API, based on NDCG metrics:

Release 1.5.0 (hierarchical clustering) - latest

| Benchmark | Colivara Score | Avg Latency (s) | Num Docs |
| --- | --- | --- | --- |
| Average | 86.8 | --- | --- |
| ArxivQA | 87.6 | 3.2 | 500 |
| DocVQA | 54.8 | 2.9 | 500 |
| InfoVQA | 90.1 | 2.9 | 500 |
| Shift Project | 87.7 | 5.3 | 1000 |
| Artificial Intelligence | 98.7 | 4.3 | 1000 |
| Energy | 96.4 | 4.5 | 1000 |
| Government Reports | 96.8 | 4.4 | 1000 |
| Healthcare Industry | 98.5 | 4.5 | 1000 |
| TabFQuad | 86.6 | 3.7 | 280 |
| TatDQA | 70.9 | 8.4 | 1663 |

Release 1.0.0

| Benchmark | Colivara Score | Avg Latency (s) | Num Docs |
| --- | --- | --- | --- |
| Average | 87.6 | --- | --- |
| ArxivQA | 88.1 | 11.1 | 500 |
| DocVQA | 56.1 | 9.3 | 500 |
| InfoVQA | 91.4 | 8.6 | 500 |
| Shift Project | 91.3 | 16.8 | 1000 |
| Artificial Intelligence | 99.5 | 12.8 | 1000 |
| Energy | 96.3 | 14.1 | 1000 |
| Government Reports | 96.7 | 14.0 | 1000 |
| Healthcare Industry | 98.3 | 20.0 | 1000 |
| TabFQuad | 86.3 | 8.1 | 280 |
| TatDQA | 71.7 | 20.0 | 1663 |

Features

Requirements

Installation

  1. Clone the repository:

    git clone https://github.com/tjmlabs/colivara-eval.git
    cd colivara-eval
    
  2. Install the dependencies:

    uv venv
    source .venv/bin/activate
    uv sync
    
  3. Configure Environment Variables:

    • Create a .env file in the root directory.
    • Add the following variables:
      COLIVARA_API_KEY=your_api_key_here
      COLIVARA_BASE_URL=https://api.colivara.com
      
  4. Download the Dataset:

    • Download the dataset file(s) for evaluation.
    • Run the following command:
    python src/download_datasets.py
    

Usage

The Colivara Evaluation Project provides a streamlined interface for managing and evaluating document collections within Colivara. The primary entry points are upsert.py for document upsert operations and evaluate.py for relevance evaluation.

Document Upsert with upsert.py

The upsert.py script upserts documents into Colivara collections. It can process a single dataset selectively or batch-process all available datasets.

Key Arguments

The flags used in the examples below are:

  • --specific_file: the dataset file to upsert (e.g., arxivqa_test_subsampled.pkl).
  • --collection_name: the Colivara collection to upsert into.
  • --all_files: process every dataset listed in DOCUMENT_FILES.
  • --upsert: perform the upsert operation.

Example Commands

1. Upserting a Single Dataset

To upsert documents from a specific dataset, run:

python upsert.py --specific_file arxivqa_test_subsampled.pkl --collection_name arxivqa_test_subsampled --upsert

This command will upsert all documents from arxivqa_test_subsampled.pkl into the arxivqa_test_subsampled collection, creating the collection if it doesn’t already exist.

2. Upserting All Datasets

To upsert documents for all datasets:

python upsert.py --all_files --upsert

This command will loop through all datasets in DOCUMENT_FILES, upserting documents into their corresponding collections.
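
If you want to drive the same runs yourself, the batch behavior can be reproduced by invoking the single-dataset command once per file. Below is a minimal sketch in Python; it assumes each dataset pickle is named after its collection (see COLLECTION_NAMES further down), which may not exactly match the project's DOCUMENT_FILES mapping.

import subprocess

# Hypothetical equivalent of --all_files: run the single-dataset upsert once per
# collection. Assumes each dataset file is named "<collection_name>.pkl".
COLLECTIONS = [
    "arxivqa_test_subsampled",
    "docvqa_test_subsampled",
    "infovqa_test_subsampled",
    # ... remaining collections from COLLECTION_NAMES
]

for collection in COLLECTIONS:
    subprocess.run(
        [
            "python", "upsert.py",
            "--specific_file", f"{collection}.pkl",
            "--collection_name", collection,
            "--upsert",
        ],
        check=True,  # stop at the first failed upsert
    )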

Relevance Evaluation with evaluate.py

The evaluate.py script is used to evaluate the relevance of document collections within Colivara.

Key Arguments

The flags used in the examples below are:

  • --collection_name: the collection to evaluate.
  • --all_files: evaluate every collection listed in DOCUMENT_FILES.

Example Commands

1. Evaluating a Single Collection

To evaluate the relevance of a specific collection, run:

python evaluate.py  --collection_name arxivqa_test_subsampled

This command will evaluate the specified collection and output the relevance metrics based on NDCG@5. Here is the list of collection names that are already uploaded:

COLLECTION_NAMES = [
    "arxivqa_test_subsampled",
    "docvqa_test_subsampled",
    "infovqa_test_subsampled",
    "shiftproject_test",
    "syntheticDocQA_artificial_intelligence_test",
    "syntheticDocQA_energy_test",
    "syntheticDocQA_government_reports_test",
    "syntheticDocQA_healthcare_industry_test",
    "tabfquad_test_subsampled",
    "tatdqa_test",
]

2. Evaluating All Collections

To evaluate the relevance of all collections:

python evaluate.py --all_files

This command will perform a relevance evaluation (NDCG@5) on all datasets listed in DOCUMENT_FILES and save the results in the out/ directory.

Collection Management with collection_manager.py

The collection_manager.py script provides utilities for listing and deleting collections within Colivara.

Commands

File Structure

Configuration

The project configuration relies on environment variables defined in a .env file:

  • COLIVARA_API_KEY: your Colivara API key.
  • COLIVARA_BASE_URL: the base URL of the Colivara API (e.g., https://api.colivara.com).

Use dotenv to load these configurations automatically, ensuring that sensitive information is securely managed.
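
A minimal sketch of loading these values with python-dotenv (the variable names match the Installation section; wiring them into a client is up to the calling script):

import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory

COLIVARA_API_KEY = os.environ["COLIVARA_API_KEY"]  # required
COLIVARA_BASE_URL = os.getenv("COLIVARA_BASE_URL", "https://api.colivara.com")  # optional override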

Technical Details

Discounted Cumulative Gain (DCG)

DCG is a measure of relevance that considers the position of relevant results in the returned list. It assigns higher scores to results that appear earlier.
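
Concretely, for the top k results with relevance scores rel_i, the standard form of the metric is:

\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i + 1)}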

Normalized Discounted Cumulative Gain (NDCG)

NDCG normalizes DCG by dividing it by the ideal DCG (IDCG) for a given query, providing a score between 0 and 1. In this project, we calculate NDCG@5 to evaluate the top 5 search results for each query.
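
In formula form, where IDCG@k is the DCG@k of the ideal ordering of the same results:

\mathrm{NDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k}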

Search Query Evaluation

The evaluation process includes:

  1. Query Processing: Matching queries against document metadata.
  2. Relevance Scoring: Using true document IDs to calculate relevance scores.
  3. NDCG Calculation: Aggregating scores to calculate the average relevance.
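
A minimal sketch of this loop, assuming a hypothetical search(query, collection_name, top_k) helper that returns ranked document IDs (the project's actual retrieval call may differ) and binary relevance with exactly one true document per query:

import math

def ndcg_at_5(ranked_ids, true_id):
    """Binary-relevance NDCG@5: with a single relevant document the ideal DCG is
    1/log2(2) = 1, so the score reduces to the discounted gain at the hit's rank."""
    for rank, doc_id in enumerate(ranked_ids[:5], start=1):
        if doc_id == true_id:
            return 1.0 / math.log2(rank + 1)
    return 0.0  # true document not retrieved in the top 5

def evaluate_collection(queries, collection_name, search):
    """queries: iterable of (query_text, true_doc_id) pairs; returns average NDCG@5."""
    scores = [
        ndcg_at_5(search(query, collection_name, top_k=5), true_id)
        for query, true_id in queries
    ]
    return sum(scores) / len(scores)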

Future Enhancements

  1. Parallel Processing: Optimize data loading and evaluation functions for concurrent processing.
  2. Extended Metrics: Add other evaluation metrics like Mean Reciprocal Rank (MRR); a sketch is shown after this list.
  3. Benchmarking with Larger Datasets: Test Colivara's scalability with larger data volumes.
  4. Automated Testing: Integrate unit and integration tests for CI/CD compatibility.
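
For item 2, MRR can be computed from the same ranked results used for NDCG@5. A minimal sketch; the (ranked_ids, true_id) pairs mirror the hypothetical evaluation loop above:

def mean_reciprocal_rank(results):
    """results: iterable of (ranked_ids, true_id) pairs, one per query."""
    reciprocal_ranks = []
    for ranked_ids, true_id in results:
        rr = 0.0
        for rank, doc_id in enumerate(ranked_ids, start=1):
            if doc_id == true_id:
                rr = 1.0 / rank  # only the first relevant hit counts
                break
        reciprocal_ranks.append(rr)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)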

License

This project is licensed under the MIT License - see the LICENSE file for details.