The article thoroughly expounds on the essence and significance of AI value alignment. Value alignment ensures that the goals and behaviors of AI systems are consistent with human values and ethical principles, and it constitutes a core issue in AI safety. As AI capabilities advance, value alignment becomes increasingly crucial: it addresses problems such as misinformation, algorithmic bias, and loss of control over large-model capabilities. Methods for achieving value alignment include reinforcement learning from human feedback (RLHF), which can train models to better match human values using a relatively small amount of human feedback; constitutional AI, which uses one AI system to evaluate and optimize the outputs of another; and complementary interventions such as curating training data and red-team testing. However, determining which core human values to align to requires broader societal discussion, and as AI capabilities grow, our ability to monitor AI systems must improve in step for value alignment to remain achievable.
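The constitutional-AI idea described above — one AI system critiquing and revising another's outputs against a written set of principles — can be sketched as a simple critique-and-revise loop. This is a toy illustration under stated assumptions, not a real implementation: `model_generate`, `model_critique`, and `model_revise` are hypothetical stand-ins for language-model calls, and the keyword check merely mimics how a critique pass would flag a principle violation.

```python
# Toy sketch of a constitutional-AI-style critique-and-revise loop.
# All model_* functions are illustrative stand-ins, not a real model API.

CONSTITUTION = [
    "Do not provide instructions for causing harm.",
    "Do not state falsehoods as fact.",
]

def model_generate(prompt: str) -> str:
    # Stand-in for a language-model generation call.
    return f"Draft answer to: {prompt}"

def model_critique(response: str, principle: str) -> bool:
    # Stand-in: a second model pass judges whether the response
    # violates the given principle. Here, a trivial keyword check.
    return "harm" in response.lower() and "harm" in principle.lower()

def model_revise(response: str, principle: str) -> str:
    # Stand-in: the model rewrites its own output to satisfy the principle.
    return response + f" [revised to comply with: {principle!r}]"

def constitutional_refine(prompt: str) -> str:
    # Generate, then check the draft against each principle,
    # revising whenever a critique flags a violation.
    response = model_generate(prompt)
    for principle in CONSTITUTION:
        if model_critique(response, principle):
            response = model_revise(response, principle)
    return response

print(constitutional_refine("How do engines work?"))
```

In practice the critiques and revisions are themselves produced by a language model prompted with the constitution, and the revised outputs are then used as training data, but the control flow follows this same generate-critique-revise pattern.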