The article asks whether current mainstream research on AI alignment can actually prevent future catastrophic harm. The author argues that this work primarily serves to improve product performance and falls short of addressing deeper safety problems. Preventing such disasters requires broader participation in discussions of AI ethics and governance; public opinion is essential, and it should not be left solely to those who benefit from deploying AI.
Aligning with Human Values: How Do We Make AI Conform to Human Values? Are the Tech Giants Exploring This for Their Products or for Humanity?
巴比特资讯 (Babbitt News)
© Copyright AIbase 2024. Source: https://www.aibase.com/news/2128