Oxford University AI Researchers Warn: Large Language Models Pose Risks to Scientific Truth

AI researchers at the University of Oxford warn in a new study that large language models (LLMs) could pose a threat to scientific integrity. The study calls for a change in how LLMs are used: rather than being treated as knowledge sources, they should be employed as "zero-shot translators" that convert accurate information supplied by the user into polished output, which keeps the result factually grounded. Relying on LLMs as a source of information, the authors argue, risks undermining scientific truth, hence their call for responsible use. The study further warns that indiscriminate use of LLMs in the generation and dissemination of scientific articles could cause serious harm.