WMT24

Translation · Verified

WMT24 is the 2024 edition of the Workshop on Machine Translation, which ranks general machine translation systems and large language models. Systems are evaluated with both automatic metrics and human evaluation; human evaluation determines the official ranking, as it is more reliable than automatic metrics. The benchmark serves as a comprehensive platform for comparing translation systems, from traditional MT architectures to newer LLM-based approaches. Preliminary results are provided to participants to assist with their system submissions, while the final official ranking is determined through human evaluation.
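
To illustrate the automatic-metric side of such an evaluation, here is a minimal sketch that scores candidate translations against references using the sacrebleu library's chrF and BLEU metrics. This is an illustrative stand-in only: WMT24's official automatic metrics and tooling differ, and the example sentences below are hypothetical.

```python
# Minimal sketch of automatic MT scoring with sacrebleu.
# Illustrative only: WMT24's official automatic metrics differ,
# and the hypotheses/references here are hypothetical examples.
import sacrebleu

# System outputs (hypotheses), one per source segment.
hypotheses = [
    "The cat sits on the mat.",
    "It is raining heavily today.",
]

# One reference stream, parallel to the hypotheses.
references = [
    ["The cat is sitting on the mat.", "It is raining hard today."],
]

chrf = sacrebleu.corpus_chrf(hypotheses, references)
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"chrF: {chrf.score:.1f}")  # both metrics report on a 0-100 scale
print(f"BLEU: {bleu.score:.1f}")
```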

Published: 2024
Score Range: 0-100
Top Score: 50.1

WMT24 Leaderboard

| Rank | Model | Provider | Score | Parameters | Released | Type |
|------|-------|----------|-------|------------|----------|------------|
| 1 | Gemma 3n | Google | 50.1 | 4B | 2025-05-20 | Multimodal |

About WMT24

Methodology

WMT24 evaluates model performance using a standardized scoring methodology. Scores are reported on a scale of 0 to 100, where higher scores indicate better performance. For detailed information about the scoring system and methodology, please refer to the original paper.
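
As a concrete illustration of how 0-100 scores order a leaderboard, the sketch below ranks entries by score in descending order. The `Entry` fields and the non-Gemma entries are hypothetical illustrations, not the benchmark's actual data model.

```python
# Sketch of ordering leaderboard entries by their 0-100 score.
# The Entry fields and sample data (except Gemma 3n's published
# score of 50.1) are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Entry:
    model: str
    provider: str
    score: float  # 0-100, higher is better

entries = [
    Entry("Gemma 3n", "Google", 50.1),
    Entry("hypothetical-mt-a", "Example Lab", 47.3),
    Entry("hypothetical-mt-b", "Example Lab", 42.8),
]

# Higher score ranks first.
ranked = sorted(entries, key=lambda e: e.score, reverse=True)
for rank, e in enumerate(ranked, start=1):
    print(f"{rank}. {e.model} ({e.provider}): {e.score:.1f}")
```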

Publication

This benchmark was published in 2024. See the technical paper for full details.
