Contrastive Preference Optimization (CPO) is a training method for machine translation that teaches models to avoid generating translations that are merely adequate but not perfect. Applied to the ALMA model, it yields a substantial performance gain: the resulting model (ALMA-R) matches or exceeds the performance of WMT competition winners and GPT-4 on the WMT'21, WMT'22, and WMT'23 test sets.
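To make the objective concrete, below is a minimal sketch of a CPO-style loss in PyTorch: a DPO-like preference term in which the reference model is dropped (so only the policy's log-probabilities of the preferred and dispreferred translations appear), plus a negative log-likelihood term on the preferred translation. The function and variable names (`cpo_loss`, `beta`, the toy log-probabilities) are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def cpo_loss(logp_chosen: torch.Tensor,
             logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of a CPO-style objective.

    logp_chosen / logp_rejected: summed sequence log-probabilities of the
    preferred and dispreferred translations under the current policy,
    one scalar per batch example.
    """
    # Preference term: push the policy's log-probability of the preferred
    # translation above that of the dispreferred one. Unlike DPO, no
    # frozen reference model term appears here.
    prefer = -F.logsigmoid(beta * (logp_chosen - logp_rejected)).mean()
    # NLL term on the preferred translation keeps the policy anchored
    # to the preferred data.
    nll = -logp_chosen.mean()
    return prefer + nll

# Toy usage with made-up log-probabilities for a batch of 4 examples.
logp_chosen = torch.tensor([-12.3, -9.8, -15.1, -11.0])
logp_rejected = torch.tensor([-13.0, -11.2, -14.9, -12.5])
print(cpo_loss(logp_chosen, logp_rejected).item())
```

Dropping the reference model halves the memory cost relative to DPO, which is part of what makes this style of preference training practical for fine-tuning translation models.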