---
benchmark: mteb
type: evaluation
submission_name: MTEB
---
> [!NOTE]
> Previously it was possible to submit model results to MTEB by adding them to the model metadata. This is no longer supported, as we want to ensure high-quality metadata.
This repository contains the results of the embedding benchmark evaluated using the package `mteb`.
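For context, result files like those stored here are produced by running an evaluation with the `mteb` package and writing the output to a local folder. A minimal sketch is shown below; the model and task names are arbitrary examples, and the exact API may differ between `mteb` versions:

```python
import mteb

# Load a model by name; "sentence-transformers/all-MiniLM-L6-v2" is just an example.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Select one or more benchmark tasks to evaluate on.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

# Run the evaluation; JSON result files are written to the output folder.
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```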
| Reference | Description |
| --- | --- |
| 🦾 Leaderboard | An up-to-date leaderboard of embedding models |
| 📚 mteb | Guides and instructions on how to use `mteb`, including running evaluations, submitting scores, etc. |
| 🙋 Questions | Questions about the results |
| 🙋 Issues | Issues or bugs you have found |