Prometheus-Eval is an open-source toolkit for assessing the quality of large language model (LLM) outputs on generation tasks. It provides a straightforward interface for evaluating instruction-response pairs with the Prometheus family of evaluator models. Prometheus 2 supports both direct assessment (absolute scoring of a single response against a rubric) and pairwise ranking (relative scoring of two responses), allowing it to stand in for human judges and proprietary LLM-based evaluators while addressing concerns around fairness, controllability, and cost.
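
As a rough illustration of the direct-assessment (absolute scoring) path, the sketch below follows the usage pattern documented in the prometheus-eval project README: a vLLM-backed Prometheus 2 judge grades a single response against a 1-5 rubric via `single_absolute_grade`. The rubric text, instruction, response, and reference answer are illustrative placeholders, and exact class and method names may differ across package versions.

```python
# Minimal sketch of absolute scoring with prometheus-eval.
# Names follow the project README; APIs may vary by version.
from prometheus_eval.vllm import VLLM
from prometheus_eval import PrometheusEval
from prometheus_eval.prompts import ABSOLUTE_PROMPT, SCORE_RUBRIC_TEMPLATE

# Load the Prometheus 2 evaluator and wrap it in the judge interface.
model = VLLM(model="prometheus-eval/prometheus-7b-v2.0")
judge = PrometheusEval(model=model, absolute_grade_template=ABSOLUTE_PROMPT)

# A 1-5 scoring rubric; the criterion and level descriptions below are
# illustrative, not part of the library.
rubric = SCORE_RUBRIC_TEMPLATE.format(
    criteria="Is the response factually accurate and complete?",
    score1_description="Mostly inaccurate or irrelevant.",
    score2_description="Partially accurate with major omissions.",
    score3_description="Accurate but incomplete.",
    score4_description="Accurate and mostly complete.",
    score5_description="Accurate, complete, and clearly explained.",
)

# Direct assessment: the judge returns natural-language feedback and a 1-5 score.
feedback, score = judge.single_absolute_grade(
    instruction="Explain why the sky appears blue.",
    response=(
        "The sky looks blue because shorter wavelengths of sunlight are "
        "scattered more strongly by air molecules (Rayleigh scattering)."
    ),
    rubric=rubric,
    reference_answer=(
        "Rayleigh scattering by atmospheric molecules scatters blue light "
        "more than red, so the sky appears blue."
    ),
)
print(score, feedback)
```

The pairwise-ranking (relative scoring) path works analogously, with the judge comparing two candidate responses to the same instruction and returning a preference; batched variants of both modes are also described in the project documentation.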