***AI Researchers Issue Startling New Warning***
Researchers are issuing a stark warning about Large Language Models (LLMs) like those powering chatbots. These LLMs have a tendency to hallucinate, generating false content that they present as fact, and the risk these hallucinations pose to science and scientific truth is causing alarm in the academic community.
According to a new study published in Nature Human Behaviour, LLMs are designed to produce helpful and convincing responses without any guarantee that those responses are accurate or aligned with factual truth. Because the training data used to build these models often contains false statements, opinions, and inaccurate information, the models themselves pose a direct threat to information accuracy, especially in scientific and educational settings.
Researchers are concerned that users often treat LLMs as a reliable source of knowledge and anthropomorphize the technology, leading them to believe the information provided is accurate even when it is not. The Oxford Internet Institute’s Professor Brent Mittelstadt, co-author of the paper, urges the scientific community to use LLMs responsibly as “zero-shot translators”: providing the model with the appropriate data and asking it to transform that data into a conclusion or into code, so that factual correctness can be checked against the supplied input.
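To make the idea concrete, here is a minimal sketch of what that “zero-shot translator” pattern might look like in practice, assuming the OpenAI Python SDK. The CSV data, prompt wording, and model name are illustrative assumptions, not details from the paper; the point is simply that the researcher supplies the facts and the model only reshapes them.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Vetted input data supplied by the researcher: the model is asked only to
# translate this data into a conclusion, not to recall facts on its own.
measurements = """sample_id,temperature_c,yield_pct
A1,25,62.4
A2,37,71.9
A3,45,58.1"""

prompt = (
    "Using ONLY the data below, state in one sentence which sample had the "
    "highest yield and at what temperature. Do not add outside information.\n\n"
    + measurements
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute the model you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Because the output can be checked line by line against the table the researcher supplied, any hallucinated claim is easy to spot, which is exactly the safeguard the authors argue for.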
The study contends that while LLMs will certainly assist with scientific workflows, it is crucial for the scientific community to use them responsibly and to maintain clear expectations of how they can contribute to science and education. Overall, the researchers’ warnings underline the pressing need to confront the dangers of relying on LLMs as sources of knowledge and information.
What are your thoughts on the risks LLMs pose to scientific truth? How do you think the academic community can address this issue? Let us know in the comments below!