Amid the global race in large language models in 2023, the performance of Chinese large language models has drawn significant attention. The Chinese University of Hong Kong has launched CLEVA, a Chinese evaluation platform that includes a comprehensive set of evaluation tasks and metrics. CLEVA scores models along multiple dimensions, including accuracy, robustness, fairness, efficiency, calibration, and diversity. It also supplies a variety of prompt templates for each task, reducing sensitivity to any single prompt wording and supporting fair evaluation and analysis of model performance. In addition, CLEVA employs several methods to mitigate the risk of data contamination and offers an easy-to-use interface.
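The idea of evaluating a model under multiple prompt templates can be sketched as follows. This is a minimal illustration, not CLEVA's actual implementation: the templates, toy dataset, and `dummy_model` heuristic below are all hypothetical placeholders standing in for real assets and a real LLM call.

```python
from statistics import mean

# Hypothetical prompt templates for a sentiment task; scoring each
# template separately makes prompt sensitivity visible instead of
# hiding it behind a single number.
TEMPLATES = [
    "Review: {text}\nSentiment (positive/negative):",
    "Is the following review positive or negative?\n{text}\nAnswer:",
    "{text}\nThe sentiment of this review is",
]

# Toy labeled dataset (placeholder data, not from CLEVA).
DATASET = [
    {"text": "The food was wonderful.", "label": "positive"},
    {"text": "Terrible service, never again.", "label": "negative"},
]

def dummy_model(prompt: str) -> str:
    """Stand-in for a real LLM call: a trivial keyword heuristic."""
    return "negative" if "Terrible" in prompt else "positive"

def accuracy_per_template(model, templates, dataset):
    """Return per-template accuracies and their mean."""
    scores = []
    for tpl in templates:
        correct = sum(
            model(tpl.format(text=ex["text"])) == ex["label"]
            for ex in dataset
        )
        scores.append(correct / len(dataset))
    return scores, mean(scores)

scores, avg = accuracy_per_template(dummy_model, TEMPLATES, DATASET)
print(scores, avg)
```

Reporting both the per-template scores and their average lets an evaluator distinguish a model that is robust across phrasings from one that only performs well on a particular prompt.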