ReMedy: Machine Translation Evaluation via Reward Modeling
Learning High-Quality Machine Translation Evaluation from Human Preferences with Reward Modeling
About ReMedy
ReMedy is a new state-of-the-art machine translation (MT) evaluation framework that reframes the task as reward modeling rather than direct regression. Instead of relying on noisy human scores, ReMedy learns from pairwise human preferences, leading to better alignment with human judgments.
- State-of-the-art accuracy on WMT22–24 (39 language pairs, 111 systems)
- Segment- and system-level evaluation, outperforming GPT-4, PaLM-540B, Finetuned-PaLM2, MetricX-13B, and XCOMET
- More robust on low-quality and out-of-domain translations (ACES, MSLC benchmarks)
- Can be used as a reward model in RLHF pipelines to improve MT systems
ReMedy demonstrates that reward modeling with pairwise preferences offers a more reliable and human-aligned approach for MT evaluation.
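Concretely, a pairwise reward model is usually trained with a Bradley-Terry style objective: given a source x (optionally with a reference) and two candidate translations, where y⁺ was preferred over y⁻ by human annotators, the learned score r_θ is pushed to rank y⁺ above y⁻. The display below is a generic sketch of that objective, not necessarily ReMedy's exact training loss:

$$
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\, y^{+},\, y^{-})}\Big[\log \sigma\big(r_\theta(x, y^{+}) - r_\theta(x, y^{-})\big)\Big]
$$

Here σ is the sigmoid, so the loss shrinks as the score gap between the preferred and dispreferred translation grows.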
Contents
- Quick Installation
- Requirements
- Usage
- Full Argument List
- Model Variants
- Reproducing WMT Results
- Citation
Quick Installation
ReMedy requires Python ≥ 3.10 and leverages vLLM for fast inference.
Recommended: Install via pip
pip install remedy-mt-eval
To also get the test cases and WMT scripts used in the examples below, clone the repository:
git clone https://github.com/Smu-Tan/Remedy
cd Remedy
Install from Source
git clone https://github.com/Smu-Tan/Remedy
cd Remedy
pip install -e .
Install via Poetry
git clone https://github.com/Smu-Tan/Remedy
cd Remedy
poetry install
Requirements
- Python ≥ 3.10
- transformers ≥ 4.51.1
- vllm ≥ 0.8.5
- torch ≥ 2.6.0
- See pyproject.toml for full dependencies
Usage
Download ReMedy Models
Before using ReMedy, download a model from HuggingFace:
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download ShaomuTan/ReMedy-9B-22 --local-dir Models/remedy-9B-22
You can replace ReMedy-9B-22 with other variants such as ReMedy-9B-23 or ReMedy-9B-24.
Basic Usage
remedy-score \
--model Models/remedy-9B-22 \
--src_file testcase/en.src \
--mt_file testcase/en-de.hyp \
--ref_file testcase/de.ref \
--src_lang en --tgt_lang de \
--cache_dir $CACHE_DIR \
--save_dir testcase \
--num_gpus 4 \
--calibrate
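Since --model accepts either a local path or a HuggingFace ID (see the full argument list below), you can also point it at the hub repository directly and skip the manual download. A sketch, assuming the ShaomuTan/ReMedy-9B-22 repository ID follows the naming pattern shown above:

```bash
# Sketch: pass the HuggingFace ID instead of a local path (repo ID assumed from the naming pattern above)
remedy-score \
    --model ShaomuTan/ReMedy-9B-22 \
    --src_file testcase/en.src \
    --mt_file testcase/en-de.hyp \
    --ref_file testcase/de.ref \
    --src_lang en --tgt_lang de \
    --cache_dir $CACHE_DIR \
    --save_dir testcase \
    --calibrate
```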
Reference-Free Mode (Quality Estimation)
remedy-score \
--model Models/remedy-9B-22 \
--src_file testcase/en.src \
--mt_file testcase/en-de.hyp \
--no_ref \
--src_lang en --tgt_lang de \
--cache_dir $CACHE_DIR \
--save_dir testcase \
--num_gpus 4 \
--calibrate
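To score several language pairs in reference-free mode, the same call can be wrapped in a loop. A minimal sketch, assuming your files follow the testcase naming used above (one src.src and one src-tgt.hyp per pair):

```bash
# Minimal sketch: batch QE scoring over multiple language pairs (the file layout is assumed, not prescribed)
for lp in en-de en-zh cs-en; do
    src=${lp%-*}
    tgt=${lp#*-}
    remedy-score \
        --model Models/remedy-9B-22 \
        --src_file testcase/${src}.src \
        --mt_file testcase/${lp}.hyp \
        --no_ref \
        --src_lang ${src} --tgt_lang ${tgt} \
        --cache_dir $CACHE_DIR \
        --save_dir testcase \
        --num_gpus 4 \
        --calibrate
done
```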
Output Files
Running remedy-score writes the following files to --save_dir:
- src-tgt_raw_scores.txt (raw scores)
- src-tgt_sigmoid_scores.txt (sigmoid-normalized scores)
- src-tgt_calibration_scores.txt (calibrated scores)
- src-tgt_detailed_results.tsv (per-segment details)
- src-tgt_result.json (JSON summary, see example below)
Inspired by SacreBLEU, ReMedy provides JSON-style results to ensure transparency and comparability.
Example JSON Output
{
"metric_name": "remedy-9B-22",
"raw_score": 4.502863049214531,
"sigmoid_score": 0.9613502018042875,
"calibration_score": 0.9029647169507162,
"calibration_temp": 1.7999999999999998,
"signature": "metric_name:remedy-9B-22|lp:en-de|ref:yes|version:0.1.1",
"language_pair": "en-de",
"source_language": "en",
"target_language": "de",
"segments": 2037,
"version": "0.1.1",
"args": {
"src_file": "testcase/en.src",
"mt_file": "testcase/en-de.hyp",
"src_lang": "en",
"tgt_lang": "de",
"model": "Models/remedy-9B-22",
"cache_dir": "Models",
"save_dir": "testcase",
"ref_file": "testcase/de.ref",
"no_ref": false,
"calibrate": true,
"num_gpus": 4,
"num_seqs": 256,
"max_length": 4096,
"enable_truncate": false,
"version": false,
"list_languages": false
}
}
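Because the summary is plain JSON, the headline numbers are easy to pull out with any JSON tool. For example with jq (our tool choice here, not a ReMedy requirement), reading the en-de result file written to the save_dir above:

```bash
# Extract the headline fields from the JSON summary (jq is just one convenient option)
jq '{metric: .metric_name, raw: .raw_score, calibrated: .calibration_score, signature: .signature}' \
    testcase/en-de_result.json
```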
Full Argument List
Show CLI Arguments
Required
--src_file # Path to source file
--mt_file # Path to MT output file
--src_lang # Source language code
--tgt_lang # Target language code
--model # Model path or HuggingFace ID
--save_dir # Output directory
Optional
--ref_file # Reference file path
--no_ref # Reference-free mode
--cache_dir # Cache directory
--calibrate # Enable calibration
--num_gpus # Number of GPUs
--num_seqs # Number of sequences (default: 256)
--max_length # Max token length (default: 4096)
--enable_truncate # Truncate sequences
--version # Print version
--list_languages # List supported languages
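The two utility flags at the end make for quick sanity checks; the sketch below assumes they can be invoked on their own, without the otherwise required arguments:

```bash
remedy-score --version          # print the installed ReMedy version
remedy-score --list_languages   # list the language codes ReMedy supports
```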
Model Variants
| Model | Size | Base Model | Ref/QE | Download |
|---|---|---|---|---|
| ReMedy-9B-22 | 9B | Gemma-2-9B | Both | HuggingFace |
| ReMedy-9B-23 | 9B | Gemma-2-9B | Both | HuggingFace |
| ReMedy-9B-24 | 9B | Gemma-2-9B | Both | HuggingFace |
More variants coming soon...
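All released checkpoints follow the same naming scheme, so they can be fetched in one loop. A sketch, assuming the -22 and -24 repositories sit under the same ShaomuTan/ namespace as the -23 checkpoint shown earlier:

```bash
# Sketch: download all three WMT-year variants (repo IDs assumed from the naming pattern)
for year in 22 23 24; do
    HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download \
        ShaomuTan/ReMedy-9B-${year} --local-dir Models/remedy-9B-${year}
done
```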
Reproducing WMT Results
Click to show instructions for reproducing the WMT22–24 evaluation
1. Install mt-metrics-eval
git clone https://github.com/google-research/mt-metrics-eval.git
cd mt-metrics-eval
pip install .
2. Download WMT evaluation data
python3 -m mt_metrics_eval.mtme --download
3. Run ReMedy on WMT data
bash wmt/wmt22.sh
bash wmt/wmt23.sh
bash wmt/wmt24.sh
Results will be comparable with other metrics reported in the WMT shared tasks.
Citation
If you use ReMedy, please cite the following paper:
@article{tan2024remedy,
title={ReMedy: Learning Machine Translation Evaluation from Human Preferences with Reward Modeling},
author={Tan, Shaomu and Monz, Christof},
journal={arXiv preprint},
year={2024}
}