As artificial intelligence technology continues to advance, large language models such as GPT-4 are exerting a profound influence on society through their formidable capabilities, making the inherent safety of these models a crucial concern. OPO, an innovative method, enables real-time dynamic alignment of values without retraining the model, offering a convenient and fast alignment approach. Because it requires no training, OPO is applicable to both closed-source and open-source large models, and researchers have used it to align large models with legal and moral standards. The OPO code has been made publicly available on GitHub, and the researchers have also constructed three human-annotated test benchmarks as well as two benchmarks generated automatically by models.
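The core idea described above, alignment at inference time rather than through retraining, can be sketched as injecting relevant norms into the prompt before the model answers. The sketch below is a minimal illustration under assumptions: the rule store, the keyword retriever, and the names `RULES`, `retrieve_rules`, and `build_aligned_prompt` are all hypothetical, not the actual OPO implementation, and a real system would use a proper retriever over a large corpus of legal and moral rules.

```python
# Minimal sketch of training-free, inference-time alignment.
# All names here are illustrative assumptions, not the OPO codebase.

RULES = {
    "privacy": "Do not reveal personal data without consent.",
    "contract": "Contracts signed under duress are not enforceable.",
    "honesty": "Do not provide deliberately misleading information.",
}

def retrieve_rules(query: str, rules: dict) -> list:
    """Keyword-match stand-in for a real retriever: return the rule
    texts whose topic keyword appears in the query."""
    q = query.lower()
    return [text for topic, text in rules.items() if topic in q]

def build_aligned_prompt(query: str) -> str:
    """Prepend the retrieved norms to the query so that any model,
    open- or closed-source, answers under those constraints,
    with no retraining of model weights."""
    relevant = retrieve_rules(query, RULES)
    header = "\n".join(f"- {r}" for r in relevant)
    return (
        "Answer in accordance with the following rules:\n"
        f"{header}\n\nQuestion: {query}"
    )

prompt = build_aligned_prompt("Is a contract signed under duress valid?")
print(prompt)
```

Because the alignment lives entirely in the prompt, the rule store can be updated at any time and the change takes effect on the very next query, which is what makes this style of alignment "real-time" and equally usable with API-only closed-source models.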