The article examines whether current mainstream research on aligning artificial intelligence can effectively prevent future catastrophic harm. The author argues that such research primarily serves to improve product performance and falls short of addressing deeper safety problems. Preventing catastrophe requires broader participation in debates over AI ethics and governance: public deliberation is essential, and it should not be left solely to those who profit from deploying AI.