The Beijing Academy of Artificial Intelligence (BAAI) has released FlagPerf v1.0, an open-source engine for evaluating AI hardware. Its metrics system covers four dimensions: functional correctness, performance, resource utilization, and ecosystem compatibility. FlagPerf covers more than 20 classic models spanning natural language processing, computer vision, speech, and multimodal tasks, and BAAI has worked closely with multiple AI software and hardware vendors to adapt and evaluate different chips and frameworks. FlagPerf also supports a range of training frameworks and inference engines, allowing it to assess both the performance and the applicability of AI chips. Throughout the evaluation process, FlagPerf reviews all submitted code to keep results fair and the process impartial; the test code is fully open source, and the testing procedure and data are reproducible.
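As a rough illustration only (not FlagPerf's actual configuration or reporting format), a per-run result could be summarized along those four metric dimensions roughly as follows; all names and values below are hypothetical placeholders:

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class EvalResult:
    """Hypothetical summary of one chip/model test run along the four
    metric dimensions described above (illustrative sketch, not the
    real FlagPerf schema)."""
    model: str                         # model under test, e.g. "resnet50"
    chip: str                          # vendor chip being evaluated
    framework: str                     # training framework or inference engine used
    correctness_ok: bool               # functional correctness: target quality reached
    throughput_samples_per_s: float    # performance
    resource_utilization_pct: float    # resource utilization (e.g. accelerator memory/compute)
    ecosystem_compatible: bool         # runs on the vendor's software stack as adapted


# Placeholder values purely for illustration.
result = EvalResult(
    model="resnet50",
    chip="example-accelerator",
    framework="pytorch",
    correctness_ok=True,
    throughput_samples_per_s=1234.5,
    resource_utilization_pct=87.2,
    ecosystem_compatible=True,
)

print(json.dumps(asdict(result), indent=2))
```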