AIbase · 2023-11-24 12:00:30
Oxford University AI Researcher Warns: Large Language Models Pose Risks to Scientific Truth
An AI researcher from Oxford University has warned that large language models (LLMs) may threaten scientific integrity. Rather than treating LLMs as sources of knowledge, the study recommends using them as 'zero-shot translators': the model is given vetted, accurate input and asked only to transform it into the desired form, so that the factual content never depends on what the model itself 'knows'. Relying on LLMs as a source of information, the researcher cautions, risks jeopardizing scientific truth, and casual use of LLMs in scientific papers could cause significant harm.
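As a rough illustration of the 'zero-shot translator' pattern described above, the sketch below contrasts asking a model for facts directly with supplying vetted source material and asking it only to restructure that material. The `call_llm` helper, the prompts, and the example source text are hypothetical placeholders for illustration, not part of the Oxford study.

```python
# A minimal sketch of the "zero-shot translator" usage pattern: the LLM is
# given vetted source material and asked only to restructure it, never to
# supply facts from its own parameters. `call_llm` is a hypothetical stand-in
# for whatever chat/completion API is actually in use.

def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends `prompt` to an LLM and returns its reply."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

# Risky pattern: treating the model as a knowledge source.
risky_prompt = "Summarize the current evidence on drug X for condition Y."

# Safer "zero-shot translator" pattern: the facts come from a vetted source
# the user supplies; the model only reformats them.
vetted_source = (
    "Trial A (n=312): drug X reduced symptom scores by 18% versus placebo. "
    "Trial B (n=540): no significant difference was observed."
)
translator_prompt = (
    "Using ONLY the text between the markers, rewrite it as two plain-language "
    "bullet points. Do not add any information that is not in the text.\n"
    f"<<<\n{vetted_source}\n>>>"
)

if __name__ == "__main__":
    # Print the translator-style prompt; the factual content stays user-supplied.
    print(translator_prompt)
```

The design point is simply that the model's role shifts from recalling facts to converting user-provided facts into another form, which is the behaviour the researchers argue is safer for scientific writing.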