WMT24
WMT24 is the 2024 edition of the Workshop on Machine Translation, which ranks general machine translation systems and large language models using both automatic metrics and human evaluation. The leaderboard below summarizes the results; see About WMT24 for the full description and methodology.
WMT24 Leaderboard
| Rank | Model | Provider | Score | Parameters | Released | Type |
|---|---|---|---|---|---|---|
| 1 | Gemma 3n | Google | 50.1 | 4B | 2025-05-20 | Multimodal |
About WMT24
Description
WMT24 is the 2024 edition of the Workshop on Machine Translation, which provides a ranking of general machine translation systems and large language models. The evaluation combines automatic metrics with human evaluation, and the human evaluation is treated as the official ranking because it is more reliable than automatic metrics. The benchmark serves as a comprehensive platform for comparing translation systems, from traditional MT systems to newer LLM-based approaches. Preliminary results are provided to participants to assist with their system submissions, while the final official ranking is determined through human evaluation.
Methodology
WMT24 scores models on a scale from 0 to 100, with higher scores indicating better performance. For details on the methodology, refer to the original paper.
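As an illustration only (the official WMT24 ranking is based on human evaluation, and the 0 to 100 scores reported above are not necessarily chrF or BLEU), the sketch below shows how automatic MT metrics on a 0 to 100 scale can be computed with the sacrebleu Python library. The hypothesis and reference sentences are placeholders, not WMT24 test data.

```python
# Illustrative sketch: automatic MT metrics on a 0-100 scale via sacrebleu.
# This is not the official WMT24 evaluation protocol, which relies on
# human judgments for the final ranking.
import sacrebleu

# Hypothetical system outputs and reference translations (placeholder data).
hypotheses = [
    "The cat sat on the mat.",
    "It is raining heavily in Berlin today.",
]
references = [
    "The cat sat on the mat.",
    "It is raining hard in Berlin today.",
]

# corpus_chrf / corpus_bleu take a list of hypothesis strings and a list of
# reference streams (one inner list per reference set).
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
bleu = sacrebleu.corpus_bleu(hypotheses, [references])

print(f"chrF: {chrf.score:.1f}")  # character n-gram F-score, 0-100
print(f"BLEU: {bleu.score:.1f}")  # word n-gram overlap score, 0-100
```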
Publication
This benchmark was published in 2024.