ALMA-13B-R
Advanced Machine Translation Model
ALMA-R builds on the ALMA model, replacing ALMA's supervised fine-tuning stage with our proposed Contrastive Preference Optimization (CPO). CPO fine-tuning requires our triplet preference data for preference learning. With this change, ALMA-R can match or even surpass the performance of GPT-4 and the WMT award winners. The ALMA(-R) models and datasets can be downloaded from the GitHub repository.
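To make the preference-learning idea concrete, here is a minimal sketch of a CPO-style loss for a single example. It assumes the commonly described form: a preference term (negative log-sigmoid of the scaled log-probability margin between the preferred and dispreferred translation) plus a negative log-likelihood term on the preferred translation. The function name and the scalar inputs are illustrative, not the authors' implementation.

```python
import math

def cpo_loss(logp_preferred: float, logp_dispreferred: float, beta: float = 0.1) -> float:
    """Sketch of a CPO-style objective for one preference pair.

    logp_preferred / logp_dispreferred: model log-probabilities of the
    preferred and dispreferred translations from the triplet data.
    beta: temperature scaling the log-probability margin (illustrative).
    """
    # Preference term: push the preferred translation's log-prob above
    # the dispreferred one's (-log sigmoid of the scaled margin).
    margin = beta * (logp_preferred - logp_dispreferred)
    prefer_term = -math.log(1.0 / (1.0 + math.exp(-margin)))
    # NLL term: keep likelihood on the preferred translation itself.
    nll_term = -logp_preferred
    return prefer_term + nll_term
```

The loss is smaller when the model already assigns higher probability to the preferred translation, so gradient descent widens the margin while anchoring the model to the preferred outputs.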
ALMA-13B-R Visits Over Time
Monthly Visits: 20,899,836
Bounce Rate: 46.04%
Pages per Visit: 5.2
Visit Duration: 00:04:57