In a recent study, the OpenAI research team introduced MLE-bench, a new benchmark for assessing how well AI agents perform machine learning engineering. The benchmark draws on 75 machine learning engineering competitions from Kaggle and tests the skills agents need for real-world work, including training models, preparing datasets, and running experiments. To ground the evaluation, the team used Kaggle's public leaderboards to establish performance baselines for each competition.
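One way to use a public leaderboard as a baseline is to score an agent's submission by the fraction of human entries it outperforms, then map that percentile to a medal tier. The sketch below illustrates the idea; the function names and the medal thresholds are hypothetical placeholders, not the paper's actual scoring rules (real Kaggle medal boundaries vary with competition size).

```python
def leaderboard_percentile(agent_score, leaderboard_scores, higher_is_better=True):
    """Return the fraction of leaderboard entries the agent outperforms.

    `leaderboard_scores` is a list of human submission scores; whether a
    higher score is better depends on the competition's metric.
    """
    beaten = sum(
        (agent_score > s) if higher_is_better else (agent_score < s)
        for s in leaderboard_scores
    )
    return beaten / len(leaderboard_scores)


def medal(percentile):
    # Illustrative tiers only -- actual Kaggle medal boundaries depend on
    # the number of teams in each competition and differ from these values.
    if percentile >= 0.90:
        return "gold"
    if percentile >= 0.80:
        return "silver"
    if percentile >= 0.60:
        return "bronze"
    return "none"
```

For example, an agent scoring 0.85 against a leaderboard of `[0.5, 0.6, 0.7, 0.8, 0.9]` beats 4 of 5 entries (80th percentile), which these illustrative tiers would call a silver-level result.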