The Beijing Academy of Artificial Intelligence (BAAI) has released TACO, a code generation dataset intended to provide more challenging training data and a more demanding evaluation benchmark for code generation models. TACO offers advantages in data scale, quality, and evaluation design, including larger training and test sets, multiple solutions per problem, and fine-grained labels. Experimental results show clear performance gaps between currently popular code generation models and GPT-4 on TACO, indicating that the field still has considerable room for improvement. TACO thus serves both as a challenging benchmark and as training data for improving model performance, supporting further progress in code generation.
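As a rough illustration of how such a dataset might be explored, the sketch below loads one split and prints the fields of a single example (problem statement, reference solutions, labels, and so on). The Hugging Face Hub identifier `BAAI/TACO` and the availability of the dataset there are assumptions not stated in this article; adjust the identifier to wherever the data is actually hosted.

```python
# Minimal sketch: inspect one example from a TACO-style dataset.
# The dataset identifier "BAAI/TACO" is an assumption, not confirmed above.
from datasets import load_dataset

# Load the training split (replace the identifier if the data lives elsewhere).
taco = load_dataset("BAAI/TACO", split="train")

# Print each field of the first example with a short preview of its value,
# which should reveal the problem text, the solution list, and any
# fine-grained labels the release provides.
example = taco[0]
for field, value in example.items():
    preview = str(value)[:120].replace("\n", " ")
    print(f"{field}: {preview}")
```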