---
benchmark: mteb
type: evaluation
submission_name: MTEB
---

> [!NOTE]
> Previously it was possible to submit model results to MTEB by adding them to the model metadata. This is no longer an option, as we want to ensure high-quality metadata.

This repository contains the results of the embedding benchmark, evaluated using the mteb package.
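For context, here is a minimal sketch of how results like these are produced with mteb. It assumes a recent mteb release; the model and task names are illustrative, and any sentence-transformers-compatible model name should work:

```python
import mteb

# Load a model by name (an illustrative choice; any supported model works).
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Select one or more benchmark tasks to evaluate on.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

# Run the evaluation; scores are written as JSON files under the
# output folder, in the layout this repository stores results in.
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```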

| Reference | Description |
| --- | --- |
| 🦾 **Leaderboard** | An up-to-date leaderboard of embedding models |
| 📚 **mteb** | Guides and instructions on how to use mteb, including running, submitting scores, etc. |
| 🙋 **Questions** | Questions about the results |
| 🙋 **Issues** | Issues or bugs you have found |