**PIXIE-Splade-Preview** delivers consistently strong performance across a diverse set of domain-specific and open-domain benchmarks in Korean, demonstrating its effectiveness in real-world search applications.

The table below presents the retrieval performance of several embedding models evaluated on a variety of Korean MTEB benchmarks.

We report Normalized Discounted Cumulative Gain (NDCG) scores, which measure how well a ranked list of documents aligns with ground-truth relevance. Higher values indicate better retrieval quality.

- **Avg. NDCG**: Average of NDCG@1, @3, @5, and @10 across all benchmark datasets.
- **NDCG@k**: Relevance quality of the top-*k* retrieved results.
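As a reference for how the reported metric behaves, here is a minimal, self-contained sketch of the standard NDCG@k computation (this is an illustrative implementation, not code from the evaluator repository):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results, log2 discount."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k):
    """NDCG@k: DCG of the actual ranking divided by DCG of the ideal ordering."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: binary relevance labels of the top-5 retrieved documents.
# Relevant documents at ranks 1 and 3 yield an NDCG@5 just under 1.0.
print(round(ndcg_at_k([1, 0, 1, 0, 0], 5), 4))
```

A perfect ranking (all relevant documents ahead of all irrelevant ones) scores 1.0, so cross-model comparisons in the table below are on a 0-1 scale.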
All evaluations were conducted using the open-source **[Korean-MTEB-Retrieval-Evaluators](https://github.com/BM-K/Korean-MTEB-Retrieval-Evaluators)** codebase to ensure consistent dataset handling, indexing, retrieval, and NDCG@k computation across models.

### 7 Datasets of MTEB (Korean)