Source link : https://health365.info/huge-language-fashions-prioritize-helpfulness-over-accuracy-in-scientific-contexts-unearths-learn-about/
Credit: Pixabay/CC0 Public Domain
Large language models (LLMs) can store and recall vast amounts of medical knowledge, but their ability to process that information in rational ways remains variable. A new study led by investigators from Mass General Brigham demonstrated a vulnerability: LLMs are designed to be sycophantic, or excessively helpful and agreeable, which leads them to overwhelmingly fail to appropriately challenge illogical medical queries despite possessing the information necessary to do so.
The findings, published in npj Digital Medicine, demonstrate that targeted training and fine-tuning can improve LLMs' ability to respond appropriately to illogical prompts.
“As a community, we need to work on training both patients and clinicians to be safe users of LLMs, and a key part of that is going to be bringing to the surface the types of errors that these models make,” said corresponding author Danielle Bitterman, MD, a faculty member in the Artificial Intelligence in Medicine (AIM) Program and Clinical Lead for Data Science/AI at Mass General Brigham.
“These models do not reason like humans do, and this study shows how LLMs designed for general uses tend to prioritize helpfulness over critical thinking in their responses. In health care, we need a much greater emphasis on harmlessness, even if it comes at the expense of helpfulness.”
Researchers used a series of…
—-
Author : admin
Publish date : 2025-10-17 09:21:00
Copyright for syndicated content belongs to the linked Source.