ALMA-13B-R

Advanced Machine Translation Model

Categories: Productivity, Machine translation, Model fine-tuning
ALMA-13B-R builds on the ALMA model and is further fine-tuned with our proposed Contrastive Preference Optimization (CPO) in place of the supervised fine-tuning used for ALMA. CPO fine-tuning requires our triplet preference data for preference learning. With this recipe, ALMA-R can match or even surpass the performance of GPT-4 and the WMT award winners. The ALMA(-R) models and datasets can be downloaded from the GitHub repository.
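Since the models are published for download, here is a minimal sketch of running translation with ALMA-13B-R through Hugging Face transformers. The model id "haoranxu/ALMA-13B-R" and the ALMA-style prompt template are assumptions drawn from the project's repository, not guaranteed by this page.

```python
# Minimal translation sketch for ALMA-13B-R (assumptions: model id and
# prompt format follow the ALMA GitHub repository's documented usage).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "haoranxu/ALMA-13B-R"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# ALMA-style prompt: state the language pair, then give the source sentence.
prompt = "Translate this from German to English:\nGerman: Maschinelle Übersetzung ist nützlich.\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, num_beams=5, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```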

ALMA-13B-R Visits Over Time

Monthly Visits: 17,104,189
Bounce Rate: 44.67%
Pages per Visit: 5.5
Visit Duration: 00:05:49
