ALMA-13B-R

Advanced Machine Translation Model

ALMA-R builds upon the ALMA model, replacing the supervised fine-tuning used in ALMA with our proposed Contrastive Preference Optimization (CPO). CPO fine-tuning requires our triplet preference data for preference learning. With this approach, ALMA-R can match or even surpass the performance of GPT-4 and WMT award winners. The ALMA(-R) models and datasets can be downloaded from the GitHub repository.
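To make the training objective concrete, here is a minimal sketch of the CPO loss as described in the published paper: a sigmoid preference term that widens the log-probability margin between the preferred and dispreferred translations, plus a negative log-likelihood term on the preferred one. The function below is an illustrative PyTorch fragment, not the repository's actual implementation; the argument names and the `beta` default are assumptions.

```python
import torch
import torch.nn.functional as F

def cpo_loss(logp_chosen: torch.Tensor,
             logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the CPO objective (names and defaults are illustrative).

    logp_chosen   -- policy log-probability of the preferred translation
    logp_rejected -- policy log-probability of the dispreferred translation
    """
    # Preference term: reward a larger log-prob margin for the
    # preferred translation via a log-sigmoid of the scaled margin.
    pref = -F.logsigmoid(beta * (logp_chosen - logp_rejected))
    # Behavior-cloning term: plain NLL on the preferred translation,
    # anchoring the policy to high-quality outputs.
    nll = -logp_chosen
    return (pref + nll).mean()
```

In this sketch, a wider margin between the chosen and rejected log-probabilities lowers the preference term, while the NLL term keeps the model from drifting away from the preferred outputs.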

ALMA-13B-R Visits Over Time

Monthly Visits: 17,788,201
Bounce Rate: 44.87%
Pages per Visit: 5.4
Visit Duration: 00:05:32
