---
dataset_info:
  features:
  - name: src
    dtype: string
  - name: ref
    dtype: string
  - name: translation
    dtype: string
  - name: mqm_norm_score
    dtype: string
  - name: da_norm_score
    dtype: string
  - name: error_spans
    list:
    - name: span_end_offset
      dtype: int64
    - name: span_no
      dtype: int64
    - name: span_severity
      dtype: string
    - name: span_start_offset
      dtype: int64
    - name: span_text
      dtype: string
    - name: span_type
      dtype: string
  - name: language
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: test
    num_bytes: 2486339
    num_examples: 2380
  - name: validation
    num_bytes: 1032240
    num_examples: 1000
  - name: train
    num_bytes: 5473569
    num_examples: 4997
  download_size: 1831234
  dataset_size: 8992148
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
  - split: train
    path: data/train-*
---

# IndicMT-Eval

This repository contains the code for the paper "IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation Metrics for Indian Languages", published at ACL 2023.

## Contents

- [Overview](#overview)
- [MQM Dataset](#mqm-dataset)
- [How to use](#how-to-use)
- [Indic Comet](#indic-comet)
- [Other Metrics](#other-metrics)
- [Citation](#citation)

## Overview

We contribute a Multidimensional Quality Metric (MQM) dataset for Indian languages, created by taking the outputs of 7 popular MT systems and asking human annotators to judge the quality of the translations following the MQM guidelines. Using this rich set of annotated data, we report the performance of 16 metrics of various types on evaluating en-xx translations for 5 Indian languages. We also provide an updated metric, Indic-COMET, which not only shows stronger correlations with human judgement on Indian languages but is also more robust to perturbations.

Please find more details of this work in our paper (link coming soon).

## MQM Dataset

The MQM annotated dataset collected with the help of language experts for the 5 Indian languages (Hindi, Tamil, Marathi, Malayalam, Gujarati) can be downloaded from here (link coming soon).

An example of an MQM annotation, containing the source, the reference, and the translated output with error spans demarcated by the annotator, looks like the following:
![MQM-example](https://github.com/AI4Bharat/IndicMT-Eval/assets/23221743/0296986f-bb89-4044-88ef-b8fb71acf9ee)

More details regarding the instructions provided and the procedures followed for annotations are present in the paper.
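
Concretely, each record in the released data follows the schema listed at the top of this card. A schematic example (field names come from the dataset schema; all values below are purely illustrative placeholders, not real data):

```python
# Schematic record; values are illustrative placeholders, not real data
example = {
    "src": "An English source sentence.",
    "ref": "A reference translation.",
    "translation": "An MT system output.",
    "mqm_norm_score": "0.9",  # scores are stored as strings in this dataset
    "da_norm_score": "0.85",
    "error_spans": [
        {
            "span_no": 1,
            "span_start_offset": 3,
            "span_end_offset": 10,
            "span_text": "mistranslated span",
            "span_severity": "major",       # illustrative MQM severity
            "span_type": "Mistranslation",  # illustrative MQM error type
        }
    ],
    "language": "hi",
    "split": "test",
}
```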


### How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in a single call to the `load_dataset` function.


- Before downloading, first complete the following steps:

  1.  Gain access to the dataset and get an HF access token from [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
  2.  Install the dependencies and log in to HF:
       - Install Python
       - Run `pip install datasets huggingface_hub[cli]`
       - Log in with `huggingface-cli login` and paste the HF access token. Check [here](https://huggingface.co/docs/huggingface_hub/guides/cli#huggingface-cli-login) for details.

For example:
```python
from datasets import load_dataset
ds = load_dataset("ai4bharat/IndicMTEval")
```
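
Once downloaded, individual splits and fields can be accessed directly; for example (split and field names follow the schema above):

```python
from datasets import load_dataset

ds = load_dataset("ai4bharat/IndicMTEval")

# Splits defined in the dataset: train, validation, test
print(ds)

# Each example exposes the fields from the schema above
sample = ds["test"][0]
print(sample["src"], sample["translation"], sample["mqm_norm_score"])
```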


Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset

# Stream a single split; iterating yields one example at a time
ds = load_dataset("ai4bharat/IndicMTEval", split="test", streaming=True)
print(next(iter(ds)))
```

## Indic Comet

We load the pretrained encoder and initialize it with either XLM-RoBERTa, COMET-DA, or COMET-MQM weights. During training, we divide the model parameters into two groups: the encoder parameters, which comprise the encoder model, and the regressor parameters, which comprise the top feed-forward network. We apply gradual unfreezing and discriminative learning rates: the encoder is frozen for the first epoch while the feed-forward network is optimized with its own learning rate, and from the second epoch onwards the entire model is fine-tuned with a different learning rate. Since we are fine-tuning on a small dataset, we use early stopping with a patience of 3. The best saved checkpoint is selected using the overall Kendall-tau correlation on the test set. We use the [COMET](https://github.com/Unbabel/COMET) repository for training, and our checkpoints are compatible with their setup.

Download the best checkpoints below:

| MQM | DA |
| ---- | --- |
| [indic-comet-mqm](https://objectstore.e2enetworks.net/indic-asr-public/data/anushka/comet_mqm_1.5e-5/comet_mqm_1.5e-5/checkpoints/epoch=2-step=1875-val_kendall=0.455.ckpt) | [indic-comet-da](https://objectstore.e2enetworks.net/indic-asr-public/data/anushka/comet_da_1.5e-5/comet_da_1.5e-5/checkpoints/epoch=3-step=2500-val_kendall=0.456.ckpt) |
| [hparams.yaml](https://objectstore.e2enetworks.net/indic-asr-public/data/anushka/comet_mqm_1.5e-5/comet_mqm_1.5e-5/hparams.yaml) | [hparams.yaml](https://objectstore.e2enetworks.net/indic-asr-public/data/anushka/comet_da_1.5e-5/comet_da_1.5e-5/hparams.yaml) |
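
Since the checkpoints are compatible with the Unbabel COMET setup, they should be usable through its standard scoring API. A minimal sketch, assuming a recent `unbabel-comet` release and using a placeholder path for the downloaded checkpoint:

```python
# pip install unbabel-comet
from comet import load_from_checkpoint

# Placeholder path to the downloaded Indic-COMET checkpoint
model = load_from_checkpoint("checkpoints/indic-comet-mqm.ckpt")

data = [
    {
        "src": "An English source sentence.",
        "mt": "An MT system output in the target language.",
        "ref": "A reference translation in the target language.",
    }
]

# Returns segment-level scores and a system-level average
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores, output.system_score)
```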

## Other Metrics

We implemented the metrics with the help of the following repositories: for BLEU, METEOR, ROUGE-L, CIDEr, Embedding Averaging, Greedy Matching, and Vector Extrema, we use the implementation provided by [Sharma et al. (2017)](https://github.com/Maluuba/nlg-eval); for chrF++, TER, BERTScore, and BLEURT, we use the repository of [Castro Ferreira et al. (2020)](https://github.com/WebNLG/GenerationEval); for SMS, WMDo, and Mover-Score, we use the implementation provided by [Fabbri et al. (2020)](https://github.com/Yale-LILY/SummEval). For all the remaining task-specific metrics, we use the official code from the respective papers.

The Python script `code/evaluate.py` runs all of these metrics on the given dataset.
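
As a point of reference, the string-based metrics can also be computed with `sacrebleu`; this is a hedged sketch, not necessarily the exact implementation used in the paper:

```python
# pip install sacrebleu
from sacrebleu.metrics import BLEU, CHRF, TER

hypotheses = ["An MT system output."]        # illustrative placeholders
references = [["A reference translation."]]  # one reference stream

bleu = BLEU()
chrf = CHRF(word_order=2)  # word_order=2 gives chrF++
ter = TER()

print(bleu.corpus_score(hypotheses, references).score)
print(chrf.corpus_score(hypotheses, references).score)
print(ter.corpus_score(hypotheses, references).score)
```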

## Citation

If you find IndicMT-Eval useful in your research or work, please consider citing our papers:
```bibtex
@article{DBLP:journals/corr/abs-2212-10180,
  author       = {Ananya B. Sai and
                  Tanay Dixit and
                  Vignesh Nagarajan and
                  Anoop Kunchukuttan and
                  Pratyush Kumar and
                  Mitesh M. Khapra and
                  Raj Dabre},
  title        = {IndicMT Eval: {A} Dataset to Meta-Evaluate Machine Translation metrics
                  for Indian Languages},
  journal      = {CoRR},
  volume       = {abs/2212.10180},
  year         = {2022}
}

@article{singh2024good,
  title={How Good is Zero-Shot MT Evaluation for Low Resource Indian Languages?},
  author={Singh, Anushka and Sai, Ananya B and Dabre, Raj and Puduppully, Ratish and Kunchukuttan, Anoop and Khapra, Mitesh M},
  journal={arXiv preprint arXiv:2406.03893},
  year={2024}
}
```